Test Report: KVM_Linux_crio 19423

                    
7f7446252791c927139509879c70af875912dc64:2024-08-18:35842

Failed tests (32/311)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 155.04
36 TestAddons/parallel/MetricsServer 327.89
45 TestAddons/StoppedEnableDisable 154.35
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.29
164 TestMultiControlPlane/serial/StopSecondaryNode 142.04
166 TestMultiControlPlane/serial/RestartSecondaryNode 55.74
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 403.45
171 TestMultiControlPlane/serial/StopCluster 141.58
218 TestMountStart/serial/RestartStopped 27.18
230 TestMultiNode/serial/RestartKeepsNodes 327.55
232 TestMultiNode/serial/StopMultiNode 141.27
239 TestPreload 360.02
247 TestKubernetesUpgrade 405.61
279 TestPause/serial/SecondStartNoReconfiguration 87.89
319 TestStartStop/group/old-k8s-version/serial/FirstStart 277.04
344 TestStartStop/group/no-preload/serial/Stop 139.23
346 TestStartStop/group/embed-certs/serial/Stop 139.12
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 139
356 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
357 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 116.31
358 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 705.76
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.51
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.28
369 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.41
370 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.53
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 381.7
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 337.33
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 497.77
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 155.98
TestAddons/parallel/Ingress (155.04s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-483094 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-483094 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-483094 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [90ac7d42-930b-44c7-ad80-7da227b904c7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [90ac7d42-930b-44c7-ad80-7da227b904c7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.005449244s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-483094 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.811227996s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-483094 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.116
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 addons disable ingress-dns --alsologtostderr -v=1: (1.638837864s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 addons disable ingress --alsologtostderr -v=1: (7.70179732s)
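The failing step above is the in-VM curl against the Ingress: exit status 28 from the remote command is curl's operation-timed-out code, meaning nothing answered http://127.0.0.1/ for Host nginx.example.com before curl gave up. As a rough illustration of what that step checks, here is a minimal Go sketch of an equivalent reachability probe. It is not the test's implementation: it probes the node IP (192.168.39.116, taken from the log above) from outside the VM rather than 127.0.0.1 from inside it, and the 2-minute budget, 5-second poll interval, and 200 OK success criterion are assumptions.

// ingressprobe.go - hypothetical reachability probe for the nginx Ingress,
// modeled loosely on the failing "curl -s http://127.0.0.1/ -H 'Host: ...'" step.
// IP and host name come from this report; timings and success criterion are assumed.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func probeIngress(nodeIP, hostHeader string, budget time.Duration) error {
	client := &http.Client{Timeout: 10 * time.Second}
	deadline := time.Now().Add(budget)
	for {
		req, err := http.NewRequest(http.MethodGet, "http://"+nodeIP+"/", nil)
		if err != nil {
			return err
		}
		req.Host = hostHeader // same effect as curl -H 'Host: nginx.example.com'
		resp, err := client.Do(req)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // ingress answered for the expected virtual host
			}
			err = fmt.Errorf("unexpected HTTP status %d", resp.StatusCode)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ingress not reachable within %v: %w", budget, err)
		}
		time.Sleep(5 * time.Second) // assumed poll interval
	}
}

func main() {
	// Values below are from this report; adjust for another cluster.
	if err := probeIngress("192.168.39.116", "nginx.example.com", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

To try it, go run the file against a cluster where the nginx Ingress and pod/service from the test's testdata have been applied; the names and timings above are illustrative only.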
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-483094 -n addons-483094
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 logs -n 25: (1.150157913s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-664260                                                                     | download-only-664260 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:39 UTC |
	| delete  | -p download-only-371992                                                                     | download-only-371992 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-128446 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC |                     |
	|         | binary-mirror-128446                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41387                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-128446                                                                     | binary-mirror-128446 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC |                     |
	|         | addons-483094                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC |                     |
	|         | addons-483094                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-483094 --wait=true                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:41 UTC | 18 Aug 24 18:42 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | -p addons-483094                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-483094 ssh cat                                                                       | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | /opt/local-path-provisioner/pvc-512d0e6d-7527-4406-847a-81e42c2ab4b4_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | addons-483094                                                                               |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:43 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | -p addons-483094                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-483094 ip                                                                            | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | addons-483094                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-483094 ssh curl -s                                                                   | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-483094 addons                                                                        | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:43 UTC | 18 Aug 24 18:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-483094 addons                                                                        | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:43 UTC | 18 Aug 24 18:43 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-483094 ip                                                                            | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:45 UTC |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:45 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:45 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:39:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
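(Aside on the header just above: it documents the klog line layout "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" used by every entry that follows. A small Go sketch of parsing that layout is shown here for reference; the regular expression follows the documented pattern, while the struct and field names are illustrative and not part of minikube.)

// klogparse.go - hypothetical parser for the klog layout described in the header above.
package main

import (
	"fmt"
	"regexp"
)

type klogLine struct {
	Severity string // I, W, E or F
	Stamp    string // mmdd hh:mm:ss.uuuuuu
	ThreadID string
	File     string
	Line     string
	Msg      string
}

// Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([^:]+):(\d+)\]\s(.*)$`)

func parseKlog(s string) (klogLine, bool) {
	m := klogRe.FindStringSubmatch(s)
	if m == nil {
		return klogLine{}, false
	}
	return klogLine{Severity: m[1], Stamp: m[2], ThreadID: m[3], File: m[4], Line: m[5], Msg: m[6]}, true
}

func main() {
	// Example entry copied from this report's log.
	l, ok := parseKlog("I0818 18:39:23.638738   15764 out.go:345] Setting OutFile to fd 1 ...")
	fmt.Println(ok, l.Severity, l.File+":"+l.Line, l.Msg)
}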
	I0818 18:39:23.638738   15764 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:39:23.638995   15764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:39:23.639004   15764 out.go:358] Setting ErrFile to fd 2...
	I0818 18:39:23.639009   15764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:39:23.639167   15764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 18:39:23.639762   15764 out.go:352] Setting JSON to false
	I0818 18:39:23.640517   15764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1308,"bootTime":1724005056,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:39:23.640566   15764 start.go:139] virtualization: kvm guest
	I0818 18:39:23.642597   15764 out.go:177] * [addons-483094] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:39:23.644246   15764 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:39:23.644248   15764 notify.go:220] Checking for updates...
	I0818 18:39:23.646745   15764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:39:23.647924   15764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:39:23.649094   15764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:39:23.650228   15764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:39:23.651472   15764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:39:23.652807   15764 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:39:23.684403   15764 out.go:177] * Using the kvm2 driver based on user configuration
	I0818 18:39:23.685479   15764 start.go:297] selected driver: kvm2
	I0818 18:39:23.685496   15764 start.go:901] validating driver "kvm2" against <nil>
	I0818 18:39:23.685506   15764 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:39:23.686184   15764 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:39:23.686275   15764 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 18:39:23.701233   15764 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 18:39:23.701274   15764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:39:23.701466   15764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:39:23.701527   15764 cni.go:84] Creating CNI manager for ""
	I0818 18:39:23.701536   15764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 18:39:23.701549   15764 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 18:39:23.701600   15764 start.go:340] cluster config:
	{Name:addons-483094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:39:23.701704   15764 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:39:23.703620   15764 out.go:177] * Starting "addons-483094" primary control-plane node in "addons-483094" cluster
	I0818 18:39:23.704902   15764 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:39:23.704938   15764 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 18:39:23.704947   15764 cache.go:56] Caching tarball of preloaded images
	I0818 18:39:23.705012   15764 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 18:39:23.705022   15764 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 18:39:23.705334   15764 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/config.json ...
	I0818 18:39:23.705351   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/config.json: {Name:mkc1f748b6b929ccbaa374580668e65846b66e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:23.705489   15764 start.go:360] acquireMachinesLock for addons-483094: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:39:23.705534   15764 start.go:364] duration metric: took 30.356µs to acquireMachinesLock for "addons-483094"
	I0818 18:39:23.705558   15764 start.go:93] Provisioning new machine with config: &{Name:addons-483094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:39:23.705615   15764 start.go:125] createHost starting for "" (driver="kvm2")
	I0818 18:39:23.707168   15764 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0818 18:39:23.707289   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:39:23.707328   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:39:23.721606   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0818 18:39:23.722016   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:39:23.722592   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:39:23.722617   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:39:23.722991   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:39:23.723208   15764 main.go:141] libmachine: (addons-483094) Calling .GetMachineName
	I0818 18:39:23.723355   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:23.723549   15764 start.go:159] libmachine.API.Create for "addons-483094" (driver="kvm2")
	I0818 18:39:23.723575   15764 client.go:168] LocalClient.Create starting
	I0818 18:39:23.723612   15764 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 18:39:23.795179   15764 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 18:39:23.881029   15764 main.go:141] libmachine: Running pre-create checks...
	I0818 18:39:23.881052   15764 main.go:141] libmachine: (addons-483094) Calling .PreCreateCheck
	I0818 18:39:23.881553   15764 main.go:141] libmachine: (addons-483094) Calling .GetConfigRaw
	I0818 18:39:23.881930   15764 main.go:141] libmachine: Creating machine...
	I0818 18:39:23.881944   15764 main.go:141] libmachine: (addons-483094) Calling .Create
	I0818 18:39:23.882095   15764 main.go:141] libmachine: (addons-483094) Creating KVM machine...
	I0818 18:39:23.883649   15764 main.go:141] libmachine: (addons-483094) DBG | found existing default KVM network
	I0818 18:39:23.884343   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:23.884197   15786 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0818 18:39:23.884368   15764 main.go:141] libmachine: (addons-483094) DBG | created network xml: 
	I0818 18:39:23.884378   15764 main.go:141] libmachine: (addons-483094) DBG | <network>
	I0818 18:39:23.884390   15764 main.go:141] libmachine: (addons-483094) DBG |   <name>mk-addons-483094</name>
	I0818 18:39:23.884396   15764 main.go:141] libmachine: (addons-483094) DBG |   <dns enable='no'/>
	I0818 18:39:23.884402   15764 main.go:141] libmachine: (addons-483094) DBG |   
	I0818 18:39:23.884409   15764 main.go:141] libmachine: (addons-483094) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0818 18:39:23.884419   15764 main.go:141] libmachine: (addons-483094) DBG |     <dhcp>
	I0818 18:39:23.884428   15764 main.go:141] libmachine: (addons-483094) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0818 18:39:23.884438   15764 main.go:141] libmachine: (addons-483094) DBG |     </dhcp>
	I0818 18:39:23.884446   15764 main.go:141] libmachine: (addons-483094) DBG |   </ip>
	I0818 18:39:23.884458   15764 main.go:141] libmachine: (addons-483094) DBG |   
	I0818 18:39:23.884465   15764 main.go:141] libmachine: (addons-483094) DBG | </network>
	I0818 18:39:23.884475   15764 main.go:141] libmachine: (addons-483094) DBG | 
	I0818 18:39:23.889949   15764 main.go:141] libmachine: (addons-483094) DBG | trying to create private KVM network mk-addons-483094 192.168.39.0/24...
	I0818 18:39:23.953191   15764 main.go:141] libmachine: (addons-483094) DBG | private KVM network mk-addons-483094 192.168.39.0/24 created
	I0818 18:39:23.953217   15764 main.go:141] libmachine: (addons-483094) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094 ...
	I0818 18:39:23.953231   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:23.953184   15786 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:39:23.953252   15764 main.go:141] libmachine: (addons-483094) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:39:23.953305   15764 main.go:141] libmachine: (addons-483094) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 18:39:24.225940   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:24.225806   15786 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa...
	I0818 18:39:24.405855   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:24.405746   15786 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/addons-483094.rawdisk...
	I0818 18:39:24.405879   15764 main.go:141] libmachine: (addons-483094) DBG | Writing magic tar header
	I0818 18:39:24.405890   15764 main.go:141] libmachine: (addons-483094) DBG | Writing SSH key tar header
	I0818 18:39:24.405897   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:24.405861   15786 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094 ...
	I0818 18:39:24.406017   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094
	I0818 18:39:24.406039   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 18:39:24.406051   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094 (perms=drwx------)
	I0818 18:39:24.406064   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 18:39:24.406074   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 18:39:24.406085   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 18:39:24.406093   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 18:39:24.406101   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 18:39:24.406112   15764 main.go:141] libmachine: (addons-483094) Creating domain...
	I0818 18:39:24.406119   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:39:24.406139   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 18:39:24.406148   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 18:39:24.406160   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins
	I0818 18:39:24.406170   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home
	I0818 18:39:24.406181   15764 main.go:141] libmachine: (addons-483094) DBG | Skipping /home - not owner
	I0818 18:39:24.407035   15764 main.go:141] libmachine: (addons-483094) define libvirt domain using xml: 
	I0818 18:39:24.407048   15764 main.go:141] libmachine: (addons-483094) <domain type='kvm'>
	I0818 18:39:24.407054   15764 main.go:141] libmachine: (addons-483094)   <name>addons-483094</name>
	I0818 18:39:24.407059   15764 main.go:141] libmachine: (addons-483094)   <memory unit='MiB'>4000</memory>
	I0818 18:39:24.407064   15764 main.go:141] libmachine: (addons-483094)   <vcpu>2</vcpu>
	I0818 18:39:24.407068   15764 main.go:141] libmachine: (addons-483094)   <features>
	I0818 18:39:24.407073   15764 main.go:141] libmachine: (addons-483094)     <acpi/>
	I0818 18:39:24.407078   15764 main.go:141] libmachine: (addons-483094)     <apic/>
	I0818 18:39:24.407083   15764 main.go:141] libmachine: (addons-483094)     <pae/>
	I0818 18:39:24.407087   15764 main.go:141] libmachine: (addons-483094)     
	I0818 18:39:24.407092   15764 main.go:141] libmachine: (addons-483094)   </features>
	I0818 18:39:24.407098   15764 main.go:141] libmachine: (addons-483094)   <cpu mode='host-passthrough'>
	I0818 18:39:24.407103   15764 main.go:141] libmachine: (addons-483094)   
	I0818 18:39:24.407113   15764 main.go:141] libmachine: (addons-483094)   </cpu>
	I0818 18:39:24.407122   15764 main.go:141] libmachine: (addons-483094)   <os>
	I0818 18:39:24.407126   15764 main.go:141] libmachine: (addons-483094)     <type>hvm</type>
	I0818 18:39:24.407131   15764 main.go:141] libmachine: (addons-483094)     <boot dev='cdrom'/>
	I0818 18:39:24.407135   15764 main.go:141] libmachine: (addons-483094)     <boot dev='hd'/>
	I0818 18:39:24.407140   15764 main.go:141] libmachine: (addons-483094)     <bootmenu enable='no'/>
	I0818 18:39:24.407152   15764 main.go:141] libmachine: (addons-483094)   </os>
	I0818 18:39:24.407157   15764 main.go:141] libmachine: (addons-483094)   <devices>
	I0818 18:39:24.407162   15764 main.go:141] libmachine: (addons-483094)     <disk type='file' device='cdrom'>
	I0818 18:39:24.407169   15764 main.go:141] libmachine: (addons-483094)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/boot2docker.iso'/>
	I0818 18:39:24.407175   15764 main.go:141] libmachine: (addons-483094)       <target dev='hdc' bus='scsi'/>
	I0818 18:39:24.407180   15764 main.go:141] libmachine: (addons-483094)       <readonly/>
	I0818 18:39:24.407184   15764 main.go:141] libmachine: (addons-483094)     </disk>
	I0818 18:39:24.407190   15764 main.go:141] libmachine: (addons-483094)     <disk type='file' device='disk'>
	I0818 18:39:24.407203   15764 main.go:141] libmachine: (addons-483094)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 18:39:24.407213   15764 main.go:141] libmachine: (addons-483094)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/addons-483094.rawdisk'/>
	I0818 18:39:24.407224   15764 main.go:141] libmachine: (addons-483094)       <target dev='hda' bus='virtio'/>
	I0818 18:39:24.407251   15764 main.go:141] libmachine: (addons-483094)     </disk>
	I0818 18:39:24.407271   15764 main.go:141] libmachine: (addons-483094)     <interface type='network'>
	I0818 18:39:24.407288   15764 main.go:141] libmachine: (addons-483094)       <source network='mk-addons-483094'/>
	I0818 18:39:24.407304   15764 main.go:141] libmachine: (addons-483094)       <model type='virtio'/>
	I0818 18:39:24.407326   15764 main.go:141] libmachine: (addons-483094)     </interface>
	I0818 18:39:24.407346   15764 main.go:141] libmachine: (addons-483094)     <interface type='network'>
	I0818 18:39:24.407359   15764 main.go:141] libmachine: (addons-483094)       <source network='default'/>
	I0818 18:39:24.407368   15764 main.go:141] libmachine: (addons-483094)       <model type='virtio'/>
	I0818 18:39:24.407375   15764 main.go:141] libmachine: (addons-483094)     </interface>
	I0818 18:39:24.407406   15764 main.go:141] libmachine: (addons-483094)     <serial type='pty'>
	I0818 18:39:24.407419   15764 main.go:141] libmachine: (addons-483094)       <target port='0'/>
	I0818 18:39:24.407428   15764 main.go:141] libmachine: (addons-483094)     </serial>
	I0818 18:39:24.407437   15764 main.go:141] libmachine: (addons-483094)     <console type='pty'>
	I0818 18:39:24.407453   15764 main.go:141] libmachine: (addons-483094)       <target type='serial' port='0'/>
	I0818 18:39:24.407465   15764 main.go:141] libmachine: (addons-483094)     </console>
	I0818 18:39:24.407481   15764 main.go:141] libmachine: (addons-483094)     <rng model='virtio'>
	I0818 18:39:24.407509   15764 main.go:141] libmachine: (addons-483094)       <backend model='random'>/dev/random</backend>
	I0818 18:39:24.407529   15764 main.go:141] libmachine: (addons-483094)     </rng>
	I0818 18:39:24.407538   15764 main.go:141] libmachine: (addons-483094)     
	I0818 18:39:24.407547   15764 main.go:141] libmachine: (addons-483094)     
	I0818 18:39:24.407557   15764 main.go:141] libmachine: (addons-483094)   </devices>
	I0818 18:39:24.407571   15764 main.go:141] libmachine: (addons-483094) </domain>
	I0818 18:39:24.407594   15764 main.go:141] libmachine: (addons-483094) 
	I0818 18:39:24.413473   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:66:aa:dd in network default
	I0818 18:39:24.413926   15764 main.go:141] libmachine: (addons-483094) Ensuring networks are active...
	I0818 18:39:24.413943   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:24.414519   15764 main.go:141] libmachine: (addons-483094) Ensuring network default is active
	I0818 18:39:24.414766   15764 main.go:141] libmachine: (addons-483094) Ensuring network mk-addons-483094 is active
	I0818 18:39:24.415196   15764 main.go:141] libmachine: (addons-483094) Getting domain xml...
	I0818 18:39:24.415758   15764 main.go:141] libmachine: (addons-483094) Creating domain...
	I0818 18:39:25.783780   15764 main.go:141] libmachine: (addons-483094) Waiting to get IP...
	I0818 18:39:25.784472   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:25.784770   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:25.784804   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:25.784764   15786 retry.go:31] will retry after 289.335953ms: waiting for machine to come up
	I0818 18:39:26.075176   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:26.075649   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:26.075669   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:26.075598   15786 retry.go:31] will retry after 259.825296ms: waiting for machine to come up
	I0818 18:39:26.337111   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:26.337576   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:26.337604   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:26.337538   15786 retry.go:31] will retry after 333.382386ms: waiting for machine to come up
	I0818 18:39:26.671950   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:26.672315   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:26.672345   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:26.672296   15786 retry.go:31] will retry after 547.509595ms: waiting for machine to come up
	I0818 18:39:27.220962   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:27.221455   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:27.221484   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:27.221400   15786 retry.go:31] will retry after 625.960376ms: waiting for machine to come up
	I0818 18:39:27.849259   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:27.849689   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:27.849706   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:27.849657   15786 retry.go:31] will retry after 846.775747ms: waiting for machine to come up
	I0818 18:39:28.697533   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:28.697875   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:28.697902   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:28.697831   15786 retry.go:31] will retry after 1.174784407s: waiting for machine to come up
	I0818 18:39:29.874481   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:29.874889   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:29.874916   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:29.874842   15786 retry.go:31] will retry after 1.327652727s: waiting for machine to come up
	I0818 18:39:31.204223   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:31.204687   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:31.204718   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:31.204639   15786 retry.go:31] will retry after 1.243836663s: waiting for machine to come up
	I0818 18:39:32.449942   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:32.450370   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:32.450394   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:32.450331   15786 retry.go:31] will retry after 1.494727458s: waiting for machine to come up
	I0818 18:39:33.946788   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:33.947170   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:33.947203   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:33.947103   15786 retry.go:31] will retry after 2.279766974s: waiting for machine to come up
	I0818 18:39:36.229552   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:36.229944   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:36.229969   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:36.229899   15786 retry.go:31] will retry after 3.273425506s: waiting for machine to come up
	I0818 18:39:39.504724   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:39.505123   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:39.505156   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:39.505072   15786 retry.go:31] will retry after 3.797821303s: waiting for machine to come up
	I0818 18:39:43.306946   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:43.307352   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:43.307411   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:43.307317   15786 retry.go:31] will retry after 4.699729994s: waiting for machine to come up
	I0818 18:39:48.012080   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.012480   15764 main.go:141] libmachine: (addons-483094) Found IP for machine: 192.168.39.116
	I0818 18:39:48.012506   15764 main.go:141] libmachine: (addons-483094) Reserving static IP address...
	I0818 18:39:48.012536   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has current primary IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.012828   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find host DHCP lease matching {name: "addons-483094", mac: "52:54:00:cd:86:29", ip: "192.168.39.116"} in network mk-addons-483094
	I0818 18:39:48.081064   15764 main.go:141] libmachine: (addons-483094) DBG | Getting to WaitForSSH function...
	I0818 18:39:48.081102   15764 main.go:141] libmachine: (addons-483094) Reserved static IP address: 192.168.39.116
	I0818 18:39:48.081150   15764 main.go:141] libmachine: (addons-483094) Waiting for SSH to be available...
	I0818 18:39:48.083352   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.083696   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.083722   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.083864   15764 main.go:141] libmachine: (addons-483094) DBG | Using SSH client type: external
	I0818 18:39:48.083886   15764 main.go:141] libmachine: (addons-483094) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa (-rw-------)
	I0818 18:39:48.083914   15764 main.go:141] libmachine: (addons-483094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 18:39:48.083942   15764 main.go:141] libmachine: (addons-483094) DBG | About to run SSH command:
	I0818 18:39:48.083958   15764 main.go:141] libmachine: (addons-483094) DBG | exit 0
	I0818 18:39:48.211143   15764 main.go:141] libmachine: (addons-483094) DBG | SSH cmd err, output: <nil>: 
	I0818 18:39:48.211371   15764 main.go:141] libmachine: (addons-483094) KVM machine creation complete!
	I0818 18:39:48.211734   15764 main.go:141] libmachine: (addons-483094) Calling .GetConfigRaw
	I0818 18:39:48.212280   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:48.212472   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:48.212597   15764 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 18:39:48.212612   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:39:48.213836   15764 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 18:39:48.213850   15764 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 18:39:48.213857   15764 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 18:39:48.213866   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.215875   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.216178   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.216209   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.216345   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.216508   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.216640   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.216747   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.216879   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.217083   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.217096   15764 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 18:39:48.314363   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
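The wait above is nothing more than the ssh client running "exit 0" against the guest until the command succeeds. A minimal shell sketch of the same availability probe (host, user and key path are taken from the log; the retry loop and timings are illustrative, not minikube's actual Go implementation):

    HOST=192.168.39.116
    KEY=/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa
    # Keep probing with "exit 0" until the guest accepts the SSH connection.
    for attempt in $(seq 1 30); do
      if ssh -i "$KEY" -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
             -o ConnectTimeout=10 docker@"$HOST" 'exit 0'; then
        echo "SSH is available"; break
      fi
      sleep 2
    done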
	I0818 18:39:48.314384   15764 main.go:141] libmachine: Detecting the provisioner...
	I0818 18:39:48.314394   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.316844   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.317120   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.317144   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.317345   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.317526   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.317686   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.317817   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.317955   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.318116   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.318127   15764 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 18:39:48.416096   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 18:39:48.416170   15764 main.go:141] libmachine: found compatible host: buildroot
	I0818 18:39:48.416182   15764 main.go:141] libmachine: Provisioning with buildroot...
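Provisioner detection boils down to reading /etc/os-release on the guest and matching its ID field. A rough stand-alone equivalent (the buildroot branch mirrors what the log reports; the script itself is only a sketch):

    # Ask the guest which distribution it runs and branch on the ID field.
    ID=$(ssh docker@192.168.39.116 '. /etc/os-release && echo "$ID"')
    case "$ID" in
      buildroot) echo "found compatible host: buildroot" ;;
      *)         echo "unrecognized provisioner: $ID" >&2 ;;
    esac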
	I0818 18:39:48.416194   15764 main.go:141] libmachine: (addons-483094) Calling .GetMachineName
	I0818 18:39:48.416458   15764 buildroot.go:166] provisioning hostname "addons-483094"
	I0818 18:39:48.416483   15764 main.go:141] libmachine: (addons-483094) Calling .GetMachineName
	I0818 18:39:48.416632   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.418922   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.419203   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.419228   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.419412   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.419595   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.419749   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.419855   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.420034   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.420200   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.420212   15764 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-483094 && echo "addons-483094" | sudo tee /etc/hostname
	I0818 18:39:48.534143   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-483094
	
	I0818 18:39:48.534167   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.536671   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.537001   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.537028   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.537246   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.537434   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.537588   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.537723   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.537888   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.538085   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.538110   15764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-483094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-483094/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-483094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:39:48.646465   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:39:48.646496   15764 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 18:39:48.646532   15764 buildroot.go:174] setting up certificates
	I0818 18:39:48.646547   15764 provision.go:84] configureAuth start
	I0818 18:39:48.646559   15764 main.go:141] libmachine: (addons-483094) Calling .GetMachineName
	I0818 18:39:48.646773   15764 main.go:141] libmachine: (addons-483094) Calling .GetIP
	I0818 18:39:48.649289   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.649644   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.649668   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.649793   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.651947   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.652262   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.652297   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.652465   15764 provision.go:143] copyHostCerts
	I0818 18:39:48.652536   15764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 18:39:48.652671   15764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 18:39:48.652789   15764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 18:39:48.652875   15764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.addons-483094 san=[127.0.0.1 192.168.39.116 addons-483094 localhost minikube]
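minikube generates this server certificate in Go, but the same result can be approximated with openssl for illustration: sign a server certificate against the existing CA and attach the SANs listed above. The file names below (server.csr, server.pem) are placeholders rather than the test's actual paths, and the snippet needs bash for the <(...) process substitution:

    CERTS=/home/jenkins/minikube-integration/19423-7747/.minikube/certs
    # Create a key + CSR, then sign it with the minikube CA, adding the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.addons-483094"
    openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.116,DNS:addons-483094,DNS:localhost,DNS:minikube")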
	I0818 18:39:48.746611   15764 provision.go:177] copyRemoteCerts
	I0818 18:39:48.746665   15764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:39:48.746688   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.749096   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.749416   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.749445   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.749582   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.749804   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.749934   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.750091   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:39:48.829006   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 18:39:48.852681   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 18:39:48.882174   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:39:48.904650   15764 provision.go:87] duration metric: took 258.089014ms to configureAuth
	I0818 18:39:48.904678   15764 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:39:48.904848   15764 config.go:182] Loaded profile config "addons-483094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:39:48.904917   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.907557   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.907919   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.907949   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.908086   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.908289   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.908446   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.908577   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.908717   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.908884   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.908901   15764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 18:39:49.159028   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 18:39:49.159052   15764 main.go:141] libmachine: Checking connection to Docker...
	I0818 18:39:49.159063   15764 main.go:141] libmachine: (addons-483094) Calling .GetURL
	I0818 18:39:49.160412   15764 main.go:141] libmachine: (addons-483094) DBG | Using libvirt version 6000000
	I0818 18:39:49.162740   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.163239   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.163281   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.163362   15764 main.go:141] libmachine: Docker is up and running!
	I0818 18:39:49.163374   15764 main.go:141] libmachine: Reticulating splines...
	I0818 18:39:49.163399   15764 client.go:171] duration metric: took 25.439815685s to LocalClient.Create
	I0818 18:39:49.163425   15764 start.go:167] duration metric: took 25.439876359s to libmachine.API.Create "addons-483094"
	I0818 18:39:49.163438   15764 start.go:293] postStartSetup for "addons-483094" (driver="kvm2")
	I0818 18:39:49.163456   15764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:39:49.163479   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.163696   15764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:39:49.163717   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:49.165582   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.165860   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.165886   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.165996   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:49.166153   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.166305   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:49.166441   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:39:49.245839   15764 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:39:49.250333   15764 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:39:49.250366   15764 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 18:39:49.250452   15764 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 18:39:49.250485   15764 start.go:296] duration metric: took 87.039346ms for postStartSetup
	I0818 18:39:49.250526   15764 main.go:141] libmachine: (addons-483094) Calling .GetConfigRaw
	I0818 18:39:49.251072   15764 main.go:141] libmachine: (addons-483094) Calling .GetIP
	I0818 18:39:49.253676   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.254057   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.254089   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.254276   15764 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/config.json ...
	I0818 18:39:49.254476   15764 start.go:128] duration metric: took 25.548851466s to createHost
	I0818 18:39:49.254500   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:49.256746   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.257028   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.257055   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.257256   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:49.257451   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.257684   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.257813   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:49.257968   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:49.258115   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:49.258127   15764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:39:49.355954   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724006389.333647713
	
	I0818 18:39:49.355992   15764 fix.go:216] guest clock: 1724006389.333647713
	I0818 18:39:49.356004   15764 fix.go:229] Guest: 2024-08-18 18:39:49.333647713 +0000 UTC Remote: 2024-08-18 18:39:49.254487665 +0000 UTC m=+25.649012750 (delta=79.160048ms)
	I0818 18:39:49.356038   15764 fix.go:200] guest clock delta is within tolerance: 79.160048ms
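The guest clock check reads date +%s.%N over SSH and compares it with the host clock; a resync only happens when the delta leaves a tolerance window. A shell sketch of that comparison (the 1-second threshold here is an assumption for illustration, not minikube's configured tolerance):

    guest=$(ssh docker@192.168.39.116 'date +%s.%N')
    host=$(date +%s.%N)
    delta=$(echo "$host - $guest" | bc -l)
    echo "guest clock delta: ${delta}s"
    # Only act when the absolute skew exceeds the tolerated window.
    if (( $(echo "${delta#-} > 1" | bc -l) )); then
      echo "guest clock outside tolerance, would resync here"
    fi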
	I0818 18:39:49.356049   15764 start.go:83] releasing machines lock for "addons-483094", held for 25.650504481s
	I0818 18:39:49.356073   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.356349   15764 main.go:141] libmachine: (addons-483094) Calling .GetIP
	I0818 18:39:49.358864   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.359194   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.359223   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.359407   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.359920   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.360095   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.360195   15764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:39:49.360233   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:49.360394   15764 ssh_runner.go:195] Run: cat /version.json
	I0818 18:39:49.360424   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:49.362837   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.363052   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.363165   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.363194   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.363362   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:49.363517   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.363539   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.363545   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.363708   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:49.363740   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:49.363836   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.363840   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:39:49.363920   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:49.364076   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:39:49.436520   15764 ssh_runner.go:195] Run: systemctl --version
	I0818 18:39:49.461313   15764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 18:39:49.619200   15764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:39:49.625374   15764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:39:49.625428   15764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:39:49.641459   15764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 18:39:49.641481   15764 start.go:495] detecting cgroup driver to use...
	I0818 18:39:49.641538   15764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:39:49.657147   15764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:39:49.670333   15764 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:39:49.670380   15764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:39:49.683311   15764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:39:49.696481   15764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:39:49.808487   15764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:39:49.956726   15764 docker.go:233] disabling docker service ...
	I0818 18:39:49.956788   15764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:39:49.971357   15764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:39:49.983840   15764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:39:50.117154   15764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:39:50.227912   15764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:39:50.241810   15764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:39:50.260361   15764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 18:39:50.260422   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.270751   15764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 18:39:50.270815   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.281086   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.291223   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.301506   15764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:39:50.311651   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.321763   15764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.338781   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
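Taken together, the sed commands above leave the CRI-O drop-in with a pinned pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup and an unprivileged-port sysctl. Rewritten as a single file for readability (illustrative only: the real 02-crio.conf keeps its other settings, and minikube edits it in place rather than replacing it):

    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF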
	I0818 18:39:50.349726   15764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:39:50.359511   15764 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 18:39:50.359569   15764 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 18:39:50.372370   15764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
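The status-255 sysctl failure above simply means the br_netfilter module has not been loaded yet; loading it and enabling IPv4 forwarding is exactly what the next two commands do. Condensed into a stand-alone snippet (run on the guest):

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # should now resolve
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'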
	I0818 18:39:50.382185   15764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:39:50.488894   15764 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 18:39:50.628985   15764 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 18:39:50.629085   15764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 18:39:50.633722   15764 start.go:563] Will wait 60s for crictl version
	I0818 18:39:50.633795   15764 ssh_runner.go:195] Run: which crictl
	I0818 18:39:50.637355   15764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:39:50.680758   15764 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 18:39:50.680878   15764 ssh_runner.go:195] Run: crio --version
	I0818 18:39:50.708732   15764 ssh_runner.go:195] Run: crio --version
	I0818 18:39:50.737664   15764 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 18:39:50.738950   15764 main.go:141] libmachine: (addons-483094) Calling .GetIP
	I0818 18:39:50.741592   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:50.741861   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:50.741896   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:50.742085   15764 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:39:50.746153   15764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:39:50.758303   15764 kubeadm.go:883] updating cluster {Name:addons-483094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 18:39:50.758402   15764 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:39:50.758443   15764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:39:50.790346   15764 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 18:39:50.790406   15764 ssh_runner.go:195] Run: which lz4
	I0818 18:39:50.794436   15764 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 18:39:50.798549   15764 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 18:39:50.798581   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 18:39:52.070103   15764 crio.go:462] duration metric: took 1.275716427s to copy over tarball
	I0818 18:39:52.070189   15764 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 18:39:54.225830   15764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155606126s)
	I0818 18:39:54.225863   15764 crio.go:469] duration metric: took 2.155731972s to extract the tarball
	I0818 18:39:54.225872   15764 ssh_runner.go:146] rm: /preloaded.tar.lz4
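The preload flow is: check whether /preloaded.tar.lz4 already exists on the guest, copy the cached tarball over if it does not, unpack it into /var with lz4, then delete it. Condensed into shell (paths are from the log; the scp/mv steps are illustrative, since minikube streams the file through its own SSH runner):

    TARBALL=/home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
    ssh docker@192.168.39.116 'stat /preloaded.tar.lz4' || {
      scp "$TARBALL" docker@192.168.39.116:/tmp/preloaded.tar.lz4
      ssh docker@192.168.39.116 'sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4'
    }
    ssh docker@192.168.39.116 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'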
	I0818 18:39:54.263005   15764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:39:54.303622   15764 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 18:39:54.303647   15764 cache_images.go:84] Images are preloaded, skipping loading
	I0818 18:39:54.303659   15764 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.31.0 crio true true} ...
	I0818 18:39:54.303756   15764 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-483094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
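The unit fragment above is what later lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 313-byte scp further down). Installing such a drop-in by hand follows the usual systemd pattern; a sketch using the exact ExecStart from the log:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-483094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116

    [Install]
    EOF
    sudo systemctl daemon-reload && sudo systemctl start kubelet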
	I0818 18:39:54.303816   15764 ssh_runner.go:195] Run: crio config
	I0818 18:39:54.353832   15764 cni.go:84] Creating CNI manager for ""
	I0818 18:39:54.353857   15764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 18:39:54.353869   15764 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 18:39:54.353896   15764 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-483094 NodeName:addons-483094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 18:39:54.354017   15764 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-483094"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
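The generated kubeadm configuration above is staged as /var/tmp/minikube/kubeadm.yaml.new before init runs. A convenient way to sanity-check such a file without touching node state is a dry run (a standard kubeadm flag, shown as a hedged example; the test does not perform this step):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run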
	I0818 18:39:54.354082   15764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:39:54.364004   15764 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 18:39:54.364078   15764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 18:39:54.373296   15764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0818 18:39:54.389563   15764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:39:54.405485   15764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0818 18:39:54.421483   15764 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0818 18:39:54.425112   15764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:39:54.436659   15764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:39:54.540906   15764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:39:54.557709   15764 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094 for IP: 192.168.39.116
	I0818 18:39:54.557734   15764 certs.go:194] generating shared ca certs ...
	I0818 18:39:54.557769   15764 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:54.557925   15764 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 18:39:54.818917   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt ...
	I0818 18:39:54.818946   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt: {Name:mkf28c86b13b0e191b3661f8445555323102f0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:54.819117   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key ...
	I0818 18:39:54.819133   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key: {Name:mkd16e1802bbd502ffcae72b3214fd821b6d043a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:54.819206   15764 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 18:39:55.005912   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt ...
	I0818 18:39:55.005941   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt: {Name:mk823029d2bfbeee25dcfc18dc5ffc6c485d4f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.006097   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key ...
	I0818 18:39:55.006108   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key: {Name:mk4a876515714dd5a8a2e980bd42506b854fafff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.006180   15764 certs.go:256] generating profile certs ...
	I0818 18:39:55.006242   15764 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.key
	I0818 18:39:55.006253   15764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt with IP's: []
	I0818 18:39:55.177647   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt ...
	I0818 18:39:55.177676   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: {Name:mkd8ad60be4220e5f64ec42ebfd4985ac651f440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.177839   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.key ...
	I0818 18:39:55.177850   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.key: {Name:mkf6c9b21a48a4a81f58a3306868d4bf4285dd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.177917   15764 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key.cdd0c4f4
	I0818 18:39:55.177935   15764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt.cdd0c4f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116]
	I0818 18:39:55.302821   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt.cdd0c4f4 ...
	I0818 18:39:55.302850   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt.cdd0c4f4: {Name:mk1d2c7b694265936e92869785bd2d5e1339bb1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.303004   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key.cdd0c4f4 ...
	I0818 18:39:55.303017   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key.cdd0c4f4: {Name:mk7e3beb525ce78e634151f988e8ac116b17d619 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.303081   15764 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt.cdd0c4f4 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt
	I0818 18:39:55.303167   15764 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key.cdd0c4f4 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key
	I0818 18:39:55.303215   15764 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.key
	I0818 18:39:55.303231   15764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.crt with IP's: []
	I0818 18:39:55.590421   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.crt ...
	I0818 18:39:55.590447   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.crt: {Name:mk049da4345d2a77eb809c59f3483b17018aad51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.590597   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.key ...
	I0818 18:39:55.590607   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.key: {Name:mk1642417638d7a02b636ce8a833da5984f461bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.590754   15764 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:39:55.590785   15764 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:39:55.590807   15764 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:39:55.590829   15764 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 18:39:55.591431   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:39:55.621255   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:39:55.645828   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:39:55.668983   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 18:39:55.691499   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0818 18:39:55.714434   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 18:39:55.737018   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:39:55.759628   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 18:39:55.782191   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:39:55.805843   15764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 18:39:55.823509   15764 ssh_runner.go:195] Run: openssl version
	I0818 18:39:55.829295   15764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:39:55.840130   15764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:39:55.844852   15764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:39:55.844898   15764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:39:55.850680   15764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
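The b5213941.0 symlink follows the OpenSSL trust-store convention: the link name is the subject hash of the certificate, so any tool scanning /etc/ssl/certs can locate the CA. The hash comes straight from openssl, mirroring the two commands above:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

For the minikube CA in this run that hash is b5213941, which is why the link is named b5213941.0.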
	I0818 18:39:55.861143   15764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:39:55.864997   15764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:39:55.865044   15764 kubeadm.go:392] StartCluster: {Name:addons-483094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:39:55.865106   15764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 18:39:55.865146   15764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 18:39:55.899722   15764 cri.go:89] found id: ""
	I0818 18:39:55.899784   15764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 18:39:55.909294   15764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 18:39:55.918304   15764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 18:39:55.930221   15764 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 18:39:55.930240   15764 kubeadm.go:157] found existing configuration files:
	
	I0818 18:39:55.930283   15764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 18:39:55.939620   15764 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 18:39:55.939672   15764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 18:39:55.951370   15764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 18:39:55.963067   15764 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 18:39:55.963120   15764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 18:39:55.974776   15764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 18:39:55.986542   15764 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 18:39:55.986596   15764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 18:39:55.999070   15764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 18:39:56.007871   15764 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 18:39:56.007936   15764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
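The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is not found (here the files simply do not exist yet, so every grep exits with status 2 and each rm is a no-op). A minimal shell sketch of the same check-and-remove pattern, using only the endpoint and file names that appear in the log:

	#!/usr/bin/env bash
	# Illustrative only: mirrors the per-file grep/rm pattern recorded above.
	endpoint="https://control-plane.minikube.internal:8443"
	for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # If the file is missing or does not reference the expected endpoint,
	  # remove it so kubeadm can regenerate it during init.
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
	    sudo rm -f "/etc/kubernetes/$conf"
	  fi
	done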
	I0818 18:39:56.016670   15764 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 18:39:56.064804   15764 kubeadm.go:310] W0818 18:39:56.049340     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:39:56.065515   15764 kubeadm.go:310] W0818 18:39:56.050168     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:39:56.176448   15764 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 18:40:05.988587   15764 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 18:40:05.988648   15764 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 18:40:05.988739   15764 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 18:40:05.988865   15764 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 18:40:05.988953   15764 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 18:40:05.989029   15764 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 18:40:05.990662   15764 out.go:235]   - Generating certificates and keys ...
	I0818 18:40:05.990769   15764 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 18:40:05.990856   15764 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 18:40:05.990954   15764 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0818 18:40:05.991029   15764 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0818 18:40:05.991113   15764 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0818 18:40:05.991180   15764 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0818 18:40:05.991246   15764 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0818 18:40:05.991401   15764 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-483094 localhost] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0818 18:40:05.991482   15764 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0818 18:40:05.991640   15764 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-483094 localhost] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0818 18:40:05.991726   15764 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0818 18:40:05.991808   15764 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0818 18:40:05.991860   15764 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0818 18:40:05.991920   15764 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 18:40:05.991987   15764 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 18:40:05.992078   15764 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 18:40:05.992135   15764 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 18:40:05.992192   15764 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 18:40:05.992239   15764 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 18:40:05.992308   15764 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 18:40:05.992377   15764 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 18:40:05.993830   15764 out.go:235]   - Booting up control plane ...
	I0818 18:40:05.993931   15764 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 18:40:05.993995   15764 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 18:40:05.994068   15764 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 18:40:05.994166   15764 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 18:40:05.994265   15764 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 18:40:05.994301   15764 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 18:40:05.994408   15764 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 18:40:05.994519   15764 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 18:40:05.994618   15764 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.046274ms
	I0818 18:40:05.994684   15764 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 18:40:05.994761   15764 kubeadm.go:310] [api-check] The API server is healthy after 5.501512049s
	I0818 18:40:05.994881   15764 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 18:40:05.994989   15764 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 18:40:05.995044   15764 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 18:40:05.995201   15764 kubeadm.go:310] [mark-control-plane] Marking the node addons-483094 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 18:40:05.995273   15764 kubeadm.go:310] [bootstrap-token] Using token: 4b2dyc.0shil2r35fbxvtub
	I0818 18:40:05.996603   15764 out.go:235]   - Configuring RBAC rules ...
	I0818 18:40:05.996719   15764 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 18:40:05.996792   15764 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 18:40:05.996927   15764 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 18:40:05.997064   15764 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 18:40:05.997171   15764 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 18:40:05.997250   15764 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 18:40:05.997350   15764 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 18:40:05.997394   15764 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 18:40:05.997433   15764 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 18:40:05.997438   15764 kubeadm.go:310] 
	I0818 18:40:05.997493   15764 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 18:40:05.997503   15764 kubeadm.go:310] 
	I0818 18:40:05.997571   15764 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 18:40:05.997577   15764 kubeadm.go:310] 
	I0818 18:40:05.997602   15764 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 18:40:05.997668   15764 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 18:40:05.997724   15764 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 18:40:05.997734   15764 kubeadm.go:310] 
	I0818 18:40:05.997783   15764 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 18:40:05.997796   15764 kubeadm.go:310] 
	I0818 18:40:05.997856   15764 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 18:40:05.997863   15764 kubeadm.go:310] 
	I0818 18:40:05.997937   15764 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 18:40:05.998009   15764 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 18:40:05.998082   15764 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 18:40:05.998094   15764 kubeadm.go:310] 
	I0818 18:40:05.998216   15764 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 18:40:05.998316   15764 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 18:40:05.998323   15764 kubeadm.go:310] 
	I0818 18:40:05.998390   15764 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4b2dyc.0shil2r35fbxvtub \
	I0818 18:40:05.998479   15764 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 18:40:05.998510   15764 kubeadm.go:310] 	--control-plane 
	I0818 18:40:05.998521   15764 kubeadm.go:310] 
	I0818 18:40:05.998589   15764 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 18:40:05.998598   15764 kubeadm.go:310] 
	I0818 18:40:05.998663   15764 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4b2dyc.0shil2r35fbxvtub \
	I0818 18:40:05.998762   15764 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
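The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. Since this run uses certificateDir /var/lib/minikube/certs, the hash can be recomputed on the control-plane node and compared against the value in the log with the usual openssl pipeline (sketch, assuming the default RSA CA key):

	# Recompute the CA public-key hash that kubeadm prints for join commands.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'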
	I0818 18:40:05.998772   15764 cni.go:84] Creating CNI manager for ""
	I0818 18:40:05.998779   15764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 18:40:06.000289   15764 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 18:40:06.001819   15764 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 18:40:06.014544   15764 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
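The log shows a 496-byte bridge CNI config being copied to /etc/cni/net.d/1-k8s.conflist but never prints its contents. For orientation, a typical bridge-plus-portmap conflist has roughly the shape below; the subnet and field values are assumptions for illustration, not the actual file minikube generated in this run:

	# Illustrative only: the real 1-k8s.conflist written here is not shown in the log.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF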
	I0818 18:40:06.033510   15764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 18:40:06.033593   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:06.033625   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-483094 minikube.k8s.io/updated_at=2024_08_18T18_40_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=addons-483094 minikube.k8s.io/primary=true
	I0818 18:40:06.059861   15764 ops.go:34] apiserver oom_adj: -16
	I0818 18:40:06.198552   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:06.699251   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:07.198863   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:07.698790   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:08.198970   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:08.699221   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:09.199503   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:09.699325   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:10.199602   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:10.289666   15764 kubeadm.go:1113] duration metric: took 4.256135664s to wait for elevateKubeSystemPrivileges
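The repeated "kubectl get sa default" calls above (roughly every 500ms, per the timestamps) are minikube waiting for the default service account to exist before it finishes elevating kube-system privileges. The same wait can be expressed directly in shell, using only the binary and kubeconfig paths already shown in the log:

	# Poll until the "default" service account exists in the default namespace.
	until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done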
	I0818 18:40:10.289695   15764 kubeadm.go:394] duration metric: took 14.4246545s to StartCluster
	I0818 18:40:10.289717   15764 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:40:10.289833   15764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:40:10.290293   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:40:10.290489   15764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0818 18:40:10.290514   15764 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:40:10.290566   15764 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
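The toEnable map above lists which addons this profile requested. The exact flags the test harness passed are not shown in this excerpt; outside the harness, equivalent requests would look like the following illustrative minikube invocations:

	# Enable individual addons on the addons-483094 profile.
	minikube -p addons-483094 addons enable ingress
	minikube -p addons-483094 addons enable metrics-server
	# Or request several addons at start time:
	minikube start -p addons-483094 --driver=kvm2 --container-runtime=crio \
	  --addons=ingress,metrics-server,registry,csi-hostpath-driver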
	I0818 18:40:10.290667   15764 addons.go:69] Setting yakd=true in profile "addons-483094"
	I0818 18:40:10.290673   15764 addons.go:69] Setting ingress-dns=true in profile "addons-483094"
	I0818 18:40:10.290684   15764 addons.go:69] Setting default-storageclass=true in profile "addons-483094"
	I0818 18:40:10.290705   15764 addons.go:69] Setting storage-provisioner=true in profile "addons-483094"
	I0818 18:40:10.290708   15764 addons.go:69] Setting gcp-auth=true in profile "addons-483094"
	I0818 18:40:10.290723   15764 addons.go:234] Setting addon storage-provisioner=true in "addons-483094"
	I0818 18:40:10.290708   15764 addons.go:69] Setting registry=true in profile "addons-483094"
	I0818 18:40:10.290696   15764 addons.go:234] Setting addon yakd=true in "addons-483094"
	I0818 18:40:10.290751   15764 addons.go:69] Setting volumesnapshots=true in profile "addons-483094"
	I0818 18:40:10.290756   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.290760   15764 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-483094"
	I0818 18:40:10.290763   15764 addons.go:234] Setting addon registry=true in "addons-483094"
	I0818 18:40:10.290769   15764 addons.go:234] Setting addon volumesnapshots=true in "addons-483094"
	I0818 18:40:10.290779   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.290779   15764 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-483094"
	I0818 18:40:10.290794   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.290796   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.290724   15764 mustload.go:65] Loading cluster: addons-483094
	I0818 18:40:10.291029   15764 config.go:182] Loaded profile config "addons-483094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:40:10.290699   15764 addons.go:234] Setting addon ingress-dns=true in "addons-483094"
	I0818 18:40:10.291194   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291204   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291217   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291224   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.290750   15764 addons.go:69] Setting volcano=true in profile "addons-483094"
	I0818 18:40:10.291246   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291251   15764 addons.go:234] Setting addon volcano=true in "addons-483094"
	I0818 18:40:10.291274   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291325   15764 addons.go:69] Setting cloud-spanner=true in profile "addons-483094"
	I0818 18:40:10.291345   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291345   15764 addons.go:234] Setting addon cloud-spanner=true in "addons-483094"
	I0818 18:40:10.291369   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291402   15764 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-483094"
	I0818 18:40:10.291450   15764 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-483094"
	I0818 18:40:10.291475   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291544   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291578   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291609   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291650   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291665   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291709   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291732   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291736   15764 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-483094"
	I0818 18:40:10.291760   15764 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-483094"
	I0818 18:40:10.290737   15764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-483094"
	I0818 18:40:10.290737   15764 config.go:182] Loaded profile config "addons-483094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:40:10.290739   15764 addons.go:69] Setting ingress=true in profile "addons-483094"
	I0818 18:40:10.291787   15764 addons.go:69] Setting inspektor-gadget=true in profile "addons-483094"
	I0818 18:40:10.291795   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291803   15764 addons.go:234] Setting addon inspektor-gadget=true in "addons-483094"
	I0818 18:40:10.291814   15764 addons.go:69] Setting metrics-server=true in profile "addons-483094"
	I0818 18:40:10.291824   15764 addons.go:234] Setting addon ingress=true in "addons-483094"
	I0818 18:40:10.291838   15764 addons.go:234] Setting addon metrics-server=true in "addons-483094"
	I0818 18:40:10.291851   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291859   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.292140   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292163   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.292163   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292151   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.292193   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292169   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.290731   15764 addons.go:69] Setting helm-tiller=true in profile "addons-483094"
	I0818 18:40:10.292347   15764 addons.go:234] Setting addon helm-tiller=true in "addons-483094"
	I0818 18:40:10.291639   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291370   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292218   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292494   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292517   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291815   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292524   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292593   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292751   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292774   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292947   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.293304   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.293333   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292500   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.293543   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.299622   15764 out.go:177] * Verifying Kubernetes components...
	I0818 18:40:10.292559   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.301401   15764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:40:10.312344   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0818 18:40:10.312528   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I0818 18:40:10.312566   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0818 18:40:10.312977   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.313076   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.313136   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.313514   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.313532   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.313646   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.313664   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.313747   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.313760   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.313854   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0818 18:40:10.313945   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.314010   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.314556   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.314586   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.314629   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.327855   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.327906   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.328017   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.328049   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.328088   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0818 18:40:10.328254   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I0818 18:40:10.328339   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.328360   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0818 18:40:10.328487   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.328537   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0818 18:40:10.329089   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.329106   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.329184   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.329337   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.329348   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.329402   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.329458   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.331254   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.331422   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.331445   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.331573   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.331586   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.331637   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0818 18:40:10.332041   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.332069   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.332043   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.332270   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.333011   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.333063   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.333297   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.333462   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.333475   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.334160   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.334540   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.334582   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.334818   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.335151   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.335185   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.335389   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.335439   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.343513   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.346097   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.346122   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.346681   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.347282   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.347323   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.347826   15764 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-483094"
	I0818 18:40:10.347869   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.348212   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.348244   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.368891   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46787
	I0818 18:40:10.369933   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0818 18:40:10.370417   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.370775   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0818 18:40:10.371036   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.371061   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.371084   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0818 18:40:10.371146   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.371483   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.371549   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.371574   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.371557   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.371621   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.371667   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.372008   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.372610   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.372647   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.373025   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.373042   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.373308   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.373779   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.373821   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.374014   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.374820   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.374844   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.375232   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.375835   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.376870   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0818 18:40:10.377603   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0818 18:40:10.378068   15764 out.go:177]   - Using image docker.io/registry:2.8.3
	I0818 18:40:10.378122   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.378070   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.378835   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0818 18:40:10.378918   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.378940   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.379243   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.379288   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.379639   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.380091   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.380149   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.380685   15764 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 18:40:10.380704   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.380717   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.381042   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.381569   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.381607   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.382347   15764 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0818 18:40:10.382469   15764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:40:10.382492   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 18:40:10.382511   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.382668   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.382688   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.383715   15764 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0818 18:40:10.383737   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0818 18:40:10.383753   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.383771   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.387525   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.389154   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.389205   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.391051   15764 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0818 18:40:10.392942   15764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0818 18:40:10.392964   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0818 18:40:10.392983   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.393074   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.393097   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.393129   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.393146   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.393175   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.393203   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.393224   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.393367   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.393483   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.393705   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.393808   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.394131   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0818 18:40:10.394270   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.394544   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.394796   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35139
	I0818 18:40:10.394820   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0818 18:40:10.395536   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.395615   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.403567   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.403653   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.403736   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.403757   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.403774   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.403888   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.403900   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.404030   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.404042   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.404185   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.404196   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.404250   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I0818 18:40:10.404406   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.404406   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.404653   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.404651   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.404710   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.404708   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.404726   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.404922   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.405054   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.405066   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.405310   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.405348   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.405679   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.405964   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.406199   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.407820   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.407820   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.408331   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I0818 18:40:10.408712   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
	I0818 18:40:10.409009   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.409093   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.410167   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.410187   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.410315   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.410327   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.410350   15764 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0818 18:40:10.410536   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.410730   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.411869   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0818 18:40:10.411884   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0818 18:40:10.411887   15764 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0818 18:40:10.411895   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0818 18:40:10.411908   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.412713   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.412939   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:10.412961   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:10.414901   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:10.414943   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:10.414957   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:10.414966   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:10.414977   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:10.415293   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0818 18:40:10.415372   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0818 18:40:10.415631   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.415415   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:10.415439   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:10.415671   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	W0818 18:40:10.415740   15764 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0818 18:40:10.416037   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.416527   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.416553   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.417860   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0818 18:40:10.418141   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.418593   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.418768   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0818 18:40:10.418803   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.418823   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.419390   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.419407   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.419504   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.419532   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.419542   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.419685   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.419715   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.420240   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.420261   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.420307   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.420361   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.420607   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.420650   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.420744   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.421218   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.421458   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.421895   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.422367   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0818 18:40:10.423186   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.424016   15764 addons.go:234] Setting addon default-storageclass=true in "addons-483094"
	I0818 18:40:10.424057   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.424447   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.424481   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.424703   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0818 18:40:10.424735   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0818 18:40:10.424753   15764 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0818 18:40:10.424794   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.425501   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.425992   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34463
	I0818 18:40:10.426319   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0818 18:40:10.426479   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.426677   15764 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 18:40:10.426693   15764 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 18:40:10.426727   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.427602   15764 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0818 18:40:10.427801   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.428048   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.427859   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.428101   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.428105   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0818 18:40:10.428484   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.428527   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0818 18:40:10.428658   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.428706   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.429146   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.429171   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.429193   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.429263   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.429409   15764 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0818 18:40:10.429423   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0818 18:40:10.429439   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.429804   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.429806   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.430011   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.430098   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.430123   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.430434   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.430981   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.431016   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.431835   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0818 18:40:10.432169   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0818 18:40:10.432299   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.432857   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.433168   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.433323   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.433349   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.434638   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.434758   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.434817   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0818 18:40:10.434884   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0818 18:40:10.436898   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0818 18:40:10.436940   15764 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0818 18:40:10.436959   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.437006   15764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0818 18:40:10.437679   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0818 18:40:10.437695   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0818 18:40:10.437709   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.437765   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.437791   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.437821   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.437875   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.437895   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.437912   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.437939   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.437978   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.439861   15764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:40:10.440458   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.440582   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.441034   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.441277   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.441479   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.441482   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.441502   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.441672   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.441809   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.441872   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.442020   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.442165   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.442206   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I0818 18:40:10.442297   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.442613   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.443114   15764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:40:10.443285   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.443304   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.444680   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.444685   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.444688   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.445153   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.445191   15764 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0818 18:40:10.445208   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0818 18:40:10.445226   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.445229   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.445244   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.445193   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.445416   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.446755   15764 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0818 18:40:10.446824   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.447482   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.447748   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.447863   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.448019   15764 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0818 18:40:10.448034   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0818 18:40:10.448049   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.449539   15764 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0818 18:40:10.449543   15764 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0818 18:40:10.450681   15764 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0818 18:40:10.450696   15764 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0818 18:40:10.450712   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.450771   15764 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0818 18:40:10.450783   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0818 18:40:10.450799   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.450876   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.451083   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0818 18:40:10.451501   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.451979   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.451999   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.452066   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.452081   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.452294   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.452865   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.452931   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.453133   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.453559   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.453701   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.453703   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.454697   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455193   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455339   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455525   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.455548   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455688   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.455767   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.455789   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455835   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.455948   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.456047   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.456091   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.456336   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.456364   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.456409   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.456506   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.456550   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.456617   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.456654   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.456691   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.456897   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	W0818 18:40:10.458589   15764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36068->192.168.39.116:22: read: connection reset by peer
	I0818 18:40:10.458616   15764 retry.go:31] will retry after 278.288858ms: ssh: handshake failed: read tcp 192.168.39.1:36068->192.168.39.116:22: read: connection reset by peer
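Note: the warning and retry pair above show one of the concurrent addon installers dialing the node over SSH while the guest's sshd is still settling, getting a connection reset, and scheduling another attempt roughly 278ms later. As an illustration only (not minikube's actual sshutil code; the helper name, timeout, and backoff values are assumptions for the example), a minimal Go sketch of that dial-with-jittered-retry pattern could look like this:

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithRetry keeps trying to open a TCP connection to an SSH endpoint,
	// sleeping a short, jittered interval between attempts, until it succeeds
	// or the attempt budget is exhausted. (Illustrative sketch only.)
	func dialWithRetry(addr string, attempts int) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			wait := time.Duration(200+rand.Intn(200)) * time.Millisecond
			fmt.Printf("dial failure (will retry after %v): %v\n", wait, err)
			time.Sleep(wait)
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		if conn, err := dialWithRetry("192.168.39.116:22", 5); err == nil {
			conn.Close()
		}
	}

The retry in the log succeeds on a later attempt, so the transient handshake failure has no bearing on the eventual test outcome.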
	I0818 18:40:10.458664   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0818 18:40:10.458985   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.459407   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.459425   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.459721   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.459899   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.461303   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.463322   15764 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0818 18:40:10.464728   15764 out.go:177]   - Using image docker.io/busybox:stable
	I0818 18:40:10.466086   15764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0818 18:40:10.466106   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0818 18:40:10.466127   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.469146   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.469536   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.469559   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.469751   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.469955   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.470125   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.470254   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.473405   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0818 18:40:10.473762   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.474174   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.474193   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.474497   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.474671   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.475964   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.476203   15764 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 18:40:10.476225   15764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 18:40:10.476240   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.478480   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.478782   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.478807   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.478964   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.479131   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.479269   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.479407   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.644059   15764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0818 18:40:10.644061   15764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:40:10.876778   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:40:10.877783   15764 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0818 18:40:10.877805   15764 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0818 18:40:10.895768   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0818 18:40:10.895795   15764 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0818 18:40:10.940825   15764 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0818 18:40:10.940847   15764 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0818 18:40:10.945174   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0818 18:40:10.966788   15764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0818 18:40:10.966811   15764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0818 18:40:10.970421   15764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0818 18:40:10.970445   15764 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0818 18:40:11.013881   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0818 18:40:11.036409   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0818 18:40:11.038087   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0818 18:40:11.041420   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0818 18:40:11.041440   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0818 18:40:11.052462   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 18:40:11.058338   15764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 18:40:11.058378   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0818 18:40:11.060829   15764 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0818 18:40:11.060848   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0818 18:40:11.129050   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0818 18:40:11.144708   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0818 18:40:11.144735   15764 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0818 18:40:11.186428   15764 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0818 18:40:11.186452   15764 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0818 18:40:11.188625   15764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0818 18:40:11.188645   15764 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0818 18:40:11.227222   15764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0818 18:40:11.227248   15764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0818 18:40:11.292815   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0818 18:40:11.293708   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0818 18:40:11.293728   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0818 18:40:11.295523   15764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 18:40:11.295541   15764 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 18:40:11.446757   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0818 18:40:11.446781   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0818 18:40:11.467007   15764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 18:40:11.467040   15764 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 18:40:11.494022   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0818 18:40:11.494044   15764 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0818 18:40:11.499605   15764 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0818 18:40:11.499625   15764 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0818 18:40:11.519758   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0818 18:40:11.534938   15764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0818 18:40:11.534965   15764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0818 18:40:11.595607   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0818 18:40:11.595630   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0818 18:40:11.640947   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 18:40:11.714264   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0818 18:40:11.714286   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0818 18:40:11.724787   15764 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0818 18:40:11.724810   15764 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0818 18:40:11.825625   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0818 18:40:11.825645   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0818 18:40:11.898581   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0818 18:40:11.898604   15764 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0818 18:40:11.971698   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0818 18:40:11.975486   15764 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0818 18:40:11.975511   15764 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0818 18:40:12.135431   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0818 18:40:12.135456   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0818 18:40:12.271085   15764 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0818 18:40:12.271123   15764 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0818 18:40:12.275267   15764 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:40:12.275287   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0818 18:40:12.571351   15764 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.927247136s)
	I0818 18:40:12.571406   15764 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0818 18:40:12.571409   15764 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.927281169s)
	I0818 18:40:12.595431   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0818 18:40:12.595458   15764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0818 18:40:12.596327   15764 node_ready.go:35] waiting up to 6m0s for node "addons-483094" to be "Ready" ...
	I0818 18:40:12.605596   15764 node_ready.go:49] node "addons-483094" has status "Ready":"True"
	I0818 18:40:12.605621   15764 node_ready.go:38] duration metric: took 9.271083ms for node "addons-483094" to be "Ready" ...
	I0818 18:40:12.605633   15764 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:40:12.629445   15764 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qghrl" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:12.645263   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:40:12.673411   15764 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0818 18:40:12.673435   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0818 18:40:12.971810   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0818 18:40:12.971832   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0818 18:40:12.993339   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0818 18:40:13.099017   15764 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-483094" context rescaled to 1 replicas
	I0818 18:40:13.187963   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0818 18:40:13.187984   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0818 18:40:13.375718   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0818 18:40:13.375746   15764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0818 18:40:13.730558   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0818 18:40:14.728414   15764 pod_ready.go:103] pod "coredns-6f6b679f8f-qghrl" in "kube-system" namespace has status "Ready":"False"
	I0818 18:40:15.980724   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.103913036s)
	I0818 18:40:15.980795   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:15.980802   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.035583736s)
	I0818 18:40:15.980812   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:15.980837   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:15.980853   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:15.981123   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:15.981165   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:15.981177   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:15.981186   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:15.981195   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:15.981208   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:15.981219   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:15.981234   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:15.981241   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:15.981398   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:15.981416   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:15.981419   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:15.981453   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:15.981536   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:15.981552   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.231622   15764 pod_ready.go:93] pod "coredns-6f6b679f8f-qghrl" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:16.231659   15764 pod_ready.go:82] duration metric: took 3.602178342s for pod "coredns-6f6b679f8f-qghrl" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:16.231674   15764 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:16.321763   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.307843543s)
	I0818 18:40:16.321812   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.321823   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.321838   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.28539612s)
	I0818 18:40:16.321875   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.321891   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.322135   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.322191   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.322208   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:16.322218   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.322233   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.322243   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.322252   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.322211   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.322294   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.322189   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:16.322532   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.322570   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.322708   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:16.322707   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.322722   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.459344   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.459394   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.459777   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.459797   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.459835   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:17.415012   15764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0818 18:40:17.415055   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:17.417839   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:17.418310   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:17.418341   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:17.418496   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:17.418730   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:17.418898   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:17.419054   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:17.915192   15764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0818 18:40:17.947479   15764 addons.go:234] Setting addon gcp-auth=true in "addons-483094"
	I0818 18:40:17.947536   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:17.947822   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:17.947847   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:17.962205   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0818 18:40:17.962649   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:17.963126   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:17.963150   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:17.963420   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:17.963856   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:17.963881   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:17.978284   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0818 18:40:17.978639   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:17.979086   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:17.979104   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:17.979416   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:17.979597   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:17.981064   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:17.981409   15764 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0818 18:40:17.981432   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:17.983854   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:17.984262   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:17.984288   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:17.984444   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:17.984586   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:17.984713   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:17.984814   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:18.356241   15764 pod_ready.go:103] pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace has status "Ready":"False"
	I0818 18:40:18.451253   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.413131345s)
	I0818 18:40:18.451304   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451308   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.39881219s)
	I0818 18:40:18.451327   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.322249422s)
	I0818 18:40:18.451340   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451316   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451355   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.158514344s)
	I0818 18:40:18.451360   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451371   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451398   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451347   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451432   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451476   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.931691057s)
	I0818 18:40:18.451497   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451505   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451560   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.810575347s)
	I0818 18:40:18.451574   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.479845446s)
	I0818 18:40:18.451578   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451589   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451589   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451607   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451667   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.451698   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451705   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451713   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451720   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451860   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.451861   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451871   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.451874   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451884   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451885   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451892   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451893   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451894   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451901   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451904   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451909   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451911   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451917   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451956   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.451973   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451976   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451982   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451984   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451991   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451998   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.452010   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.452019   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.452385   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.452399   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.453688   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.453720   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.453728   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.453737   15764 addons.go:475] Verifying addon registry=true in "addons-483094"
	I0818 18:40:18.453851   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.453867   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454087   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.454109   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.454115   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454123   15764 addons.go:475] Verifying addon ingress=true in "addons-483094"
	I0818 18:40:18.454241   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.454252   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454281   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.454290   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454299   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.454306   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.454467   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.454494   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.454502   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454620   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.454642   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.456053   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.456063   15764 addons.go:475] Verifying addon metrics-server=true in "addons-483094"
	I0818 18:40:18.456513   15764 out.go:177] * Verifying registry addon...
	I0818 18:40:18.457420   15764 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-483094 service yakd-dashboard -n yakd-dashboard
	
	I0818 18:40:18.457427   15764 out.go:177] * Verifying ingress addon...
	I0818 18:40:18.459003   15764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0818 18:40:18.459811   15764 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0818 18:40:18.491832   15764 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0818 18:40:18.491859   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:18.491986   15764 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0818 18:40:18.491998   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:18.583725   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.583745   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.584046   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.584089   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.584106   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.938230   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.292925377s)
	W0818 18:40:18.938281   15764 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0818 18:40:18.938304   15764 retry.go:31] will retry after 190.367858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
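	Note: the two failures above are a CRD-establishment race rather than a manifest problem: csi-hostpath-snapshotclass.yaml (a VolumeSnapshotClass) is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the REST mapping for the new kind does not exist yet. The addon manager simply retries, and the --force re-apply below completes once the CRDs are served. As a rough sketch only (manifest paths copied from the log, the 60s timeout is illustrative, and kubectl is assumed to already point at this cluster), the race could also be avoided by waiting for the CRD to become established before applying the class:

	# sketch: apply the snapshot CRDs first, wait for establishment, then the class
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml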
	I0818 18:40:18.938359   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.944977912s)
	I0818 18:40:18.938420   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.938438   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.938745   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.938765   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.938780   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.938792   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.939003   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.939035   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.939042   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.971240   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:18.972284   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:19.129633   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:40:19.467973   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:19.471545   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:19.971663   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:19.972230   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:20.488214   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:20.488264   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:20.567360   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.836755613s)
	I0818 18:40:20.567431   15764 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.585997647s)
	I0818 18:40:20.567436   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:20.567457   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:20.567703   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:20.567714   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:20.567723   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:20.567736   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:20.567739   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:20.567963   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:20.568005   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:20.568036   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:20.568052   15764 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-483094"
	I0818 18:40:20.569126   15764 out.go:177] * Verifying csi-hostpath-driver addon...
	I0818 18:40:20.569140   15764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:40:20.570838   15764 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0818 18:40:20.571469   15764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0818 18:40:20.572007   15764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0818 18:40:20.572021   15764 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0818 18:40:20.582424   15764 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0818 18:40:20.582448   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
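	Note: kapi.go polls these label selectors until every matching pod reports Ready. A hand-run equivalent of the csi-hostpath-driver check (context name, namespace and label taken from the log; the 10m timeout is only an example) would be something like:

	# list the pods the harness is polling, then block until they are Ready
	kubectl --context addons-483094 -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl --context addons-483094 -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=10m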
	I0818 18:40:20.719926   15764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0818 18:40:20.719951   15764 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0818 18:40:20.763552   15764 pod_ready.go:103] pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace has status "Ready":"False"
	I0818 18:40:20.841723   15764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0818 18:40:20.841745   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0818 18:40:20.898518   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0818 18:40:20.964772   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:20.965067   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:21.076292   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:21.464577   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:21.465263   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:21.508647   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.378949296s)
	I0818 18:40:21.508713   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:21.508729   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:21.508980   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:21.509000   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:21.509015   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:21.509018   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:21.509023   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:21.509461   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:21.509548   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:21.509566   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:21.576202   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:21.967825   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:21.968093   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:22.086526   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:22.157942   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.259388165s)
	I0818 18:40:22.157999   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:22.158015   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:22.158332   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:22.158380   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:22.158402   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:22.158417   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:22.158428   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:22.158664   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:22.158683   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:22.160545   15764 addons.go:475] Verifying addon gcp-auth=true in "addons-483094"
	I0818 18:40:22.162315   15764 out.go:177] * Verifying gcp-auth addon...
	I0818 18:40:22.164165   15764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0818 18:40:22.177781   15764 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0818 18:40:22.177799   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:22.463126   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:22.464104   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:22.580921   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:22.668589   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:22.738488   15764 pod_ready.go:98] pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:22 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.116 HostIPs:[{IP:192.168.39
.116}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-18 18:40:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-18 18:40:14 +0000 UTC,FinishedAt:2024-08-18 18:40:20 +0000 UTC,ContainerID:cri-o://288682b53c85897087ce2642c592e74483dc65dca3277f09f9e8d60feb273398,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://288682b53c85897087ce2642c592e74483dc65dca3277f09f9e8d60feb273398 Started:0xc0014e53d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021604b0} {Name:kube-api-access-rt4mb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021604f0}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0818 18:40:22.738522   15764 pod_ready.go:82] duration metric: took 6.5068393s for pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace to be "Ready" ...
	E0818 18:40:22.738535   15764 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:22 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.3
9.116 HostIPs:[{IP:192.168.39.116}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-18 18:40:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-18 18:40:14 +0000 UTC,FinishedAt:2024-08-18 18:40:20 +0000 UTC,ContainerID:cri-o://288682b53c85897087ce2642c592e74483dc65dca3277f09f9e8d60feb273398,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://288682b53c85897087ce2642c592e74483dc65dca3277f09f9e8d60feb273398 Started:0xc0014e53d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021604b0} {Name:kube-api-access-rt4mb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc0021604f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
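	Note: the "Succeeded (skipping!)" entry is not a failure: coredns-6f6b679f8f-t6zm6 exited cleanly (exit code 0, Reason:Completed), most likely because the CoreDNS deployment was scaled down to a single replica during startup, and its sibling coredns-6f6b679f8f-qghrl appears as Running in the pod listing at 18:40:23 below. A quick manual check of which CoreDNS pod is actually serving (context and label as in the log) could be:

	# show the surviving CoreDNS replica and the node it landed on
	kubectl --context addons-483094 -n kube-system get pods -l k8s-app=kube-dns -o wide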
	I0818 18:40:22.738546   15764 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.742875   15764 pod_ready.go:93] pod "etcd-addons-483094" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:22.742892   15764 pod_ready.go:82] duration metric: took 4.338015ms for pod "etcd-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.742903   15764 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.747473   15764 pod_ready.go:93] pod "kube-apiserver-addons-483094" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:22.747489   15764 pod_ready.go:82] duration metric: took 4.57942ms for pod "kube-apiserver-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.747501   15764 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.751151   15764 pod_ready.go:93] pod "kube-controller-manager-addons-483094" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:22.751167   15764 pod_ready.go:82] duration metric: took 3.658541ms for pod "kube-controller-manager-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.751177   15764 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-79skb" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.755091   15764 pod_ready.go:93] pod "kube-proxy-79skb" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:22.755107   15764 pod_ready.go:82] duration metric: took 3.923477ms for pod "kube-proxy-79skb" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.755117   15764 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.964096   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:22.964704   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:23.076867   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:23.135964   15764 pod_ready.go:93] pod "kube-scheduler-addons-483094" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:23.135990   15764 pod_ready.go:82] duration metric: took 380.864569ms for pod "kube-scheduler-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:23.136000   15764 pod_ready.go:39] duration metric: took 10.530353573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:40:23.136018   15764 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:40:23.136075   15764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:40:23.167280   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:23.187778   15764 api_server.go:72] duration metric: took 12.897229525s to wait for apiserver process to appear ...
	I0818 18:40:23.187802   15764 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:40:23.187824   15764 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0818 18:40:23.192161   15764 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0818 18:40:23.193119   15764 api_server.go:141] control plane version: v1.31.0
	I0818 18:40:23.193139   15764 api_server.go:131] duration metric: took 5.329578ms to wait for apiserver health ...
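	Note: the same health probe can be reproduced without hitting https://192.168.39.116:8443 directly; kubectl can fetch the raw endpoints through the kubeconfig it already has (context name taken from the log; /readyz?verbose is just a more detailed variant of the same check):

	# apiserver health and version, equivalent to the healthz/version checks above
	kubectl --context addons-483094 get --raw='/healthz'
	kubectl --context addons-483094 get --raw='/readyz?verbose'
	kubectl --context addons-483094 version   # server version should report v1.31.0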
	I0818 18:40:23.193148   15764 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:40:23.340964   15764 system_pods.go:59] 18 kube-system pods found
	I0818 18:40:23.340998   15764 system_pods.go:61] "coredns-6f6b679f8f-qghrl" [0ad57a4a-3bea-4aae-a41d-7fbabaf0feea] Running
	I0818 18:40:23.341010   15764 system_pods.go:61] "csi-hostpath-attacher-0" [06ba1a58-4a4b-4954-9353-ec5abe630e23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0818 18:40:23.341018   15764 system_pods.go:61] "csi-hostpath-resizer-0" [432b9627-4cb3-4e74-9768-4fae94cc36dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0818 18:40:23.341029   15764 system_pods.go:61] "csi-hostpathplugin-xksf4" [2d309fa3-58bf-4a5e-8e76-38459de0b107] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0818 18:40:23.341036   15764 system_pods.go:61] "etcd-addons-483094" [245d7afe-6d36-4217-bcba-e6297ba4f1f1] Running
	I0818 18:40:23.341044   15764 system_pods.go:61] "kube-apiserver-addons-483094" [5fe8109a-a9f7-44c2-93a3-f95ca2b77e01] Running
	I0818 18:40:23.341049   15764 system_pods.go:61] "kube-controller-manager-addons-483094" [f7bf3ebf-a240-49d6-a21b-4a136a9c40ce] Running
	I0818 18:40:23.341059   15764 system_pods.go:61] "kube-ingress-dns-minikube" [7cdd6d54-a545-4f73-8e7b-95fa3aedf907] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0818 18:40:23.341065   15764 system_pods.go:61] "kube-proxy-79skb" [5e6eb18e-2c70-4df3-9ea4-f4fe95133083] Running
	I0818 18:40:23.341074   15764 system_pods.go:61] "kube-scheduler-addons-483094" [69bd15b6-8593-49b9-95b4-0db0eeb875d8] Running
	I0818 18:40:23.341083   15764 system_pods.go:61] "metrics-server-8988944d9-77bnz" [2aab5d03-7625-4a01-841b-830c70fa8ee2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 18:40:23.341092   15764 system_pods.go:61] "nvidia-device-plugin-daemonset-tvfnx" [a01a3329-cdbd-44ec-b8a3-6bc065c8505a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0818 18:40:23.341113   15764 system_pods.go:61] "registry-6fb4cdfc84-dgwqw" [067b7646-ddf6-4f0b-bc5b-f1f0f7886c10] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0818 18:40:23.341125   15764 system_pods.go:61] "registry-proxy-8h2l6" [6562d7a2-f7f9-476f-9b02-fd1cf7d752f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0818 18:40:23.341135   15764 system_pods.go:61] "snapshot-controller-56fcc65765-xhns2" [9cc6e122-f0b7-48f4-a9f4-f34bcb84c3d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:40:23.341146   15764 system_pods.go:61] "snapshot-controller-56fcc65765-xtsng" [f495d714-cc97-4377-867c-2ba6f686ce79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:40:23.341154   15764 system_pods.go:61] "storage-provisioner" [bb5b5ca7-00f4-4361-b31f-7230472ba62f] Running
	I0818 18:40:23.341166   15764 system_pods.go:61] "tiller-deploy-b48cc5f79-84wz4" [14ad1b2b-905b-495b-a83a-4e89d1a1c04f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0818 18:40:23.341174   15764 system_pods.go:74] duration metric: took 148.018507ms to wait for pod list to return data ...
	I0818 18:40:23.341185   15764 default_sa.go:34] waiting for default service account to be created ...
	I0818 18:40:23.464356   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:23.465147   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:23.535870   15764 default_sa.go:45] found service account: "default"
	I0818 18:40:23.535892   15764 default_sa.go:55] duration metric: took 194.700564ms for default service account to be created ...
	I0818 18:40:23.535901   15764 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 18:40:23.576597   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:23.667307   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:23.743940   15764 system_pods.go:86] 18 kube-system pods found
	I0818 18:40:23.743974   15764 system_pods.go:89] "coredns-6f6b679f8f-qghrl" [0ad57a4a-3bea-4aae-a41d-7fbabaf0feea] Running
	I0818 18:40:23.743985   15764 system_pods.go:89] "csi-hostpath-attacher-0" [06ba1a58-4a4b-4954-9353-ec5abe630e23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0818 18:40:23.743995   15764 system_pods.go:89] "csi-hostpath-resizer-0" [432b9627-4cb3-4e74-9768-4fae94cc36dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0818 18:40:23.744007   15764 system_pods.go:89] "csi-hostpathplugin-xksf4" [2d309fa3-58bf-4a5e-8e76-38459de0b107] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0818 18:40:23.744012   15764 system_pods.go:89] "etcd-addons-483094" [245d7afe-6d36-4217-bcba-e6297ba4f1f1] Running
	I0818 18:40:23.744020   15764 system_pods.go:89] "kube-apiserver-addons-483094" [5fe8109a-a9f7-44c2-93a3-f95ca2b77e01] Running
	I0818 18:40:23.744026   15764 system_pods.go:89] "kube-controller-manager-addons-483094" [f7bf3ebf-a240-49d6-a21b-4a136a9c40ce] Running
	I0818 18:40:23.744036   15764 system_pods.go:89] "kube-ingress-dns-minikube" [7cdd6d54-a545-4f73-8e7b-95fa3aedf907] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0818 18:40:23.744045   15764 system_pods.go:89] "kube-proxy-79skb" [5e6eb18e-2c70-4df3-9ea4-f4fe95133083] Running
	I0818 18:40:23.744051   15764 system_pods.go:89] "kube-scheduler-addons-483094" [69bd15b6-8593-49b9-95b4-0db0eeb875d8] Running
	I0818 18:40:23.744063   15764 system_pods.go:89] "metrics-server-8988944d9-77bnz" [2aab5d03-7625-4a01-841b-830c70fa8ee2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 18:40:23.744081   15764 system_pods.go:89] "nvidia-device-plugin-daemonset-tvfnx" [a01a3329-cdbd-44ec-b8a3-6bc065c8505a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0818 18:40:23.744090   15764 system_pods.go:89] "registry-6fb4cdfc84-dgwqw" [067b7646-ddf6-4f0b-bc5b-f1f0f7886c10] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0818 18:40:23.744101   15764 system_pods.go:89] "registry-proxy-8h2l6" [6562d7a2-f7f9-476f-9b02-fd1cf7d752f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0818 18:40:23.744111   15764 system_pods.go:89] "snapshot-controller-56fcc65765-xhns2" [9cc6e122-f0b7-48f4-a9f4-f34bcb84c3d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:40:23.744121   15764 system_pods.go:89] "snapshot-controller-56fcc65765-xtsng" [f495d714-cc97-4377-867c-2ba6f686ce79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:40:23.744127   15764 system_pods.go:89] "storage-provisioner" [bb5b5ca7-00f4-4361-b31f-7230472ba62f] Running
	I0818 18:40:23.744135   15764 system_pods.go:89] "tiller-deploy-b48cc5f79-84wz4" [14ad1b2b-905b-495b-a83a-4e89d1a1c04f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0818 18:40:23.744145   15764 system_pods.go:126] duration metric: took 208.238415ms to wait for k8s-apps to be running ...
	I0818 18:40:23.744158   15764 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 18:40:23.744209   15764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:40:23.786659   15764 system_svc.go:56] duration metric: took 42.495188ms WaitForService to wait for kubelet
	I0818 18:40:23.786684   15764 kubeadm.go:582] duration metric: took 13.496141334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:40:23.786705   15764 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:40:23.936424   15764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:40:23.936451   15764 node_conditions.go:123] node cpu capacity is 2
	I0818 18:40:23.936462   15764 node_conditions.go:105] duration metric: took 149.752888ms to run NodePressure ...
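	Note: the capacity and pressure figures verified here come straight from the node object; a rough manual equivalent (context name taken from the log) is:

	# node capacity (should show cpu: 2 and ephemeral-storage: 17734596Ki, as logged above)
	kubectl --context addons-483094 get nodes -o jsonpath='{.items[0].status.capacity}{"\n"}'
	# node pressure conditions checked by the NodePressure verification
	kubectl --context addons-483094 describe nodes | grep -E 'MemoryPressure|DiskPressure|PIDPressure'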
	I0818 18:40:23.936473   15764 start.go:241] waiting for startup goroutines ...
	I0818 18:40:23.964225   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:23.964655   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:24.076447   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:24.167807   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:24.463528   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:24.465011   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:24.575872   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:24.668428   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:24.964745   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:24.965147   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:25.076817   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:25.168303   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:25.599091   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:25.599522   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:25.600133   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:25.667715   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:25.964471   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:25.964764   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:26.076904   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:26.169344   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:26.463032   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:26.466730   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:26.576602   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:26.667992   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:26.964352   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:26.965149   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:27.076540   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:27.167425   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:27.463913   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:27.464877   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:27.579520   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:27.668739   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:27.965073   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:27.965292   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:28.075816   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:28.167918   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:28.464957   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:28.465302   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:28.575759   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:28.669052   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:28.963129   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:28.964996   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:29.076202   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:29.167636   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:29.464566   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:29.464826   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:29.575645   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:29.668202   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:29.963018   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:29.964067   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:30.077611   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:30.168965   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:30.464019   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:30.464211   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:30.576155   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:30.667741   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:30.963870   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:30.964072   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:31.078038   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:31.168539   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:31.463040   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:31.464576   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:31.576448   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:31.668237   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:31.964810   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:31.965518   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:32.078917   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:32.168272   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:32.462740   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:32.465169   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:32.576344   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:32.667806   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:32.964706   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:32.965022   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:33.076972   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:33.167558   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:33.463667   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:33.464875   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:33.577097   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:33.667991   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:33.962774   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:33.965090   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:34.182547   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:34.183015   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:34.463007   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:34.464949   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:34.577233   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:34.668306   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:34.963342   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:34.964351   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:35.076701   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:35.167912   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:35.464839   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:35.464990   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:35.577334   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:35.669400   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:35.963562   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:35.964795   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:36.076681   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:36.167642   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:36.463339   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:36.463954   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:36.577837   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:36.670343   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:36.962728   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:36.964758   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:37.076520   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:37.168338   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:37.464259   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:37.464582   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:37.575946   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:37.668552   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:37.964407   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:37.964553   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:38.077847   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:38.167940   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:38.463793   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:38.464233   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:38.576801   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:38.676708   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:38.964384   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:38.964439   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:39.076760   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:39.168369   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:39.462589   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:39.464485   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:39.576310   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:39.667431   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:39.963850   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:39.964365   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:40.076738   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:40.168132   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:40.462937   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:40.464776   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:40.576885   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:40.668804   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:40.964573   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:40.968288   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:41.077263   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:41.167367   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:41.462837   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:41.464051   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:41.576041   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:41.667255   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:41.964755   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:41.964969   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:42.077350   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:42.167266   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:42.462687   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:42.464459   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:42.576000   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:42.667306   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:42.963550   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:42.965673   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:43.077052   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:43.167476   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:43.464661   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:43.465069   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:43.575471   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:43.668128   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:43.963983   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:43.964431   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:44.075838   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:44.168374   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:44.464560   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:44.464686   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:45.044883   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:45.046452   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:45.048755   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:45.049284   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:45.075922   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:45.168814   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:45.466127   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:45.466544   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:45.576662   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:45.667800   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:45.964241   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:45.964994   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:46.078213   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:46.168271   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:46.463353   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:46.464719   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:46.576761   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:46.668038   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:46.964520   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:46.964913   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:47.076589   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:47.167333   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:47.463444   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:47.464789   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:47.576875   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:47.667770   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:47.964208   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:47.964942   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:48.075892   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:48.168264   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:48.463702   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:48.464543   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:48.577384   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:48.670694   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:48.963791   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:48.965493   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:49.076603   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:49.168111   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:49.462739   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:49.464160   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:49.575473   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:49.667518   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:49.963203   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:49.964977   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:50.076418   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:50.168220   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:50.463884   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:50.464290   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:50.576438   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:50.667868   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:50.963695   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:50.964122   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:51.077144   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:51.167361   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:51.462929   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:51.465360   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:51.575737   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:51.668172   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:51.964318   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:51.964546   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:52.076185   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:52.167112   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:52.462303   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:52.464248   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:52.576897   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:52.668369   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:52.965063   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:52.965097   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:53.076054   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:53.167164   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:53.463124   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:53.465694   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:53.576054   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:53.667825   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:53.964406   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:53.965550   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:54.076891   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:54.168621   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:54.463043   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:54.464634   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:54.576989   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:54.667680   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:54.967394   15764 kapi.go:107] duration metric: took 36.508368287s to wait for kubernetes.io/minikube-addons=registry ...
	I0818 18:40:54.967405   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:55.076899   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:55.169027   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:55.464091   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:55.575573   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:55.668142   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:55.964059   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:56.076585   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:56.167897   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:56.464752   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:56.578941   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:56.668188   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:56.964425   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:57.077203   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:57.167672   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:57.464552   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:57.576374   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:57.667935   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:58.132671   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:58.132944   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:58.169688   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:58.465324   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:58.576014   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:58.667346   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:58.964307   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:59.076056   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:59.166874   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:59.464223   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:59.578004   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:59.667622   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:59.965153   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:00.076483   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:00.174660   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:00.464343   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:00.576954   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:00.669473   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:00.965106   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:01.076916   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:01.168011   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:01.464177   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:01.576870   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:01.671081   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:01.965231   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:02.076046   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:02.167944   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:02.464147   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:02.575574   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:02.668275   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:02.964137   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:03.076350   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:03.169491   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:03.464159   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:03.577095   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:03.668414   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:03.965055   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:04.075920   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:04.176112   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:04.464431   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:04.575882   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:04.668732   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:04.964525   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:05.076298   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:05.167993   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:05.465297   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:05.576991   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:05.668278   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:05.964177   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:06.076845   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:06.167744   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:06.631391   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:06.631506   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:06.728889   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:06.965483   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:07.076167   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:07.168216   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:07.464301   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:07.576931   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:07.667124   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:07.964311   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:08.076495   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:08.167830   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:08.465151   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:08.576544   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:08.667969   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:08.964872   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:09.076895   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:09.169140   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:09.464180   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:09.576311   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:09.668546   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:09.965709   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:10.076292   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:10.168497   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:10.464488   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:10.576806   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:10.668500   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:10.964828   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:11.076520   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:11.168656   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:11.464290   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:11.575885   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:11.668104   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:11.964221   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:12.075843   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:12.181821   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:12.465164   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:12.578322   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:12.874814   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:12.964414   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:13.076556   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:13.168204   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:13.463848   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:13.576307   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:13.668020   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:13.964360   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:14.076060   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:14.167456   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:14.542784   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:14.642760   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:14.743049   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:14.963823   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:15.076335   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:15.169564   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:15.465244   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:15.578267   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:15.668721   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:15.964437   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:16.076110   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:16.168160   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:16.464865   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:16.576799   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:16.668307   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:16.964462   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:17.076203   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:17.168440   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:17.778368   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:17.778992   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:17.779213   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:17.964721   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:18.076211   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:18.168401   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:18.473367   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:18.576949   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:18.667061   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:18.964795   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:19.075614   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:19.168336   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:19.464119   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:19.579495   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:19.671073   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:19.964336   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:20.083323   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:20.167496   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:20.465194   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:20.576409   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:20.668105   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:20.964383   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:21.075903   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:21.169554   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:21.464292   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:21.576811   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:21.668814   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:21.965017   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:22.077944   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:22.177503   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:22.465065   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:22.575950   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:22.668167   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:22.964502   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:23.075958   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:23.168490   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:23.465509   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:23.577896   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:23.676531   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:23.965693   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:24.076593   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:24.167985   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:24.463961   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:24.576114   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:24.667144   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:24.965316   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:25.077941   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:25.168382   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:25.467510   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:25.576457   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:25.667499   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:25.964220   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:26.077010   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:26.175890   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:26.465188   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:26.576264   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:26.667767   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:26.979587   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:27.078641   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:27.178422   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:27.464579   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:27.576507   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:27.667477   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:27.964714   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:28.075884   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:28.171105   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:28.754482   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:28.755462   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:28.755669   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:28.964181   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:29.076834   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:29.168360   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:29.464161   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:29.575320   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:29.667957   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:29.973163   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:30.075922   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:30.167736   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:30.465391   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:30.575904   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:30.667576   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:30.964412   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:31.076655   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:31.167436   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:31.464577   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:31.582858   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:31.668250   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:31.964384   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:32.076200   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:32.174015   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:32.463882   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:32.576447   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:32.667804   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:32.974613   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:33.080448   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:33.168339   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:33.470526   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:33.577293   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:33.669995   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:33.964918   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:34.077640   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:34.167474   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:34.560488   15764 kapi.go:107] duration metric: took 1m16.100671823s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0818 18:41:34.660920   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:34.667947   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:35.076989   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:35.176071   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:35.576288   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:35.667504   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:36.076337   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:36.167491   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:36.576006   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:36.669852   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:37.158877   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:37.167693   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:37.577673   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:37.668222   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:38.076777   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:38.168095   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:38.576763   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:38.677323   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:39.081589   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:39.177113   15764 kapi.go:107] duration metric: took 1m17.012942188s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0818 18:41:39.178627   15764 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-483094 cluster.
	I0818 18:41:39.180045   15764 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0818 18:41:39.181691   15764 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
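
	The gcp-auth messages above describe opting a pod out of credential mounting via a label with the `gcp-auth-skip-secret` key. A minimal client-go sketch of creating such a pod is below; it is only an illustration, not part of the test run. The pod name, namespace, image, and the label value "true" are assumptions — only the label key comes from the log.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "opt-out-example", // hypothetical pod name
				Labels: map[string]string{
					// Label key taken from the gcp-auth message above; the value "true" is an assumption.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "busybox", Image: "busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}

		created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("created pod", created.Name)
	}
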
	I0818 18:41:39.577609   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:40.076243   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:40.580091   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:41.076627   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:41.576982   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:42.076793   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:42.577049   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:43.077012   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:43.577544   15764 kapi.go:107] duration metric: took 1m23.006072086s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0818 18:41:43.579293   15764 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, helm-tiller, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0818 18:41:43.580573   15764 addons.go:510] duration metric: took 1m33.290007119s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher helm-tiller nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0818 18:41:43.580608   15764 start.go:246] waiting for cluster config update ...
	I0818 18:41:43.580624   15764 start.go:255] writing updated cluster config ...
	I0818 18:41:43.580854   15764 ssh_runner.go:195] Run: rm -f paused
	I0818 18:41:43.630959   15764 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 18:41:43.632711   15764 out.go:177] * Done! kubectl is now configured to use "addons-483094" cluster and "default" namespace by default
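
	The repeated kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending, and the kapi.go:107 lines record how long each wait took. The following is a rough client-go sketch of that kind of polling loop, not minikube's actual kapi.go implementation; the namespace, selector, interval, and timeout values are assumptions chosen to match the selectors seen in the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until every pod matching selector in ns is Running,
	// logging progress in the same spirit as the "waiting for pod ..." lines above.
	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if ready {
					fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
					return nil
				}
			}
			if time.Since(start) > timeout {
				return fmt.Errorf("timed out waiting for %s", selector)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
			panic(err)
		}
	}
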
	
	
	==> CRI-O <==
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.592188233Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006723592160497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8339252-fc46-481b-91c1-67d90c1403e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.592832831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5bb99d8-3494-4ede-8c40-66d40b03b4c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.592885347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5bb99d8-3494-4ede-8c40-66d40b03b4c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.593147134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c2d29ed42516dfaf00bab1a838c272670f4b671e56dc9e9323ef64f43654e11,PodSandboxId:227d7bf7cfb04a1848a26693acc7cfb1ae1d1bfd8bc273bc2059a8ca55367107,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724006716265043239,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvkpg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff73ba9d4dff434aa03df0d0bdd1b1b0de753556554e7e3335fd082e29c2229,PodSandboxId:0c7f1370497b8da151c2909ee57d9584fd228b10478cd9fabce97d63822dfbd3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724006574730375498,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90ac7d42-930b-44c7-ad80-7da227b904c7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446bc242344d894dca8c05396944e19be7a2d8f25baf1599ca4a184fa0f31a3,PodSandboxId:1596f335b37ca9d956f32ea5453458571d030b3eeea0f83d3afcb2a979492d44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724006507524870724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 416e0a4e-2e7a-40b1-9
4c0-c6346c58a7cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad347e385d558290391d896bdfc3e4865abc458c4aa7cb3d7a5be3ac8293b185,PodSandboxId:5ffaabe43326476598bcb8c1e744c83efca48c233bbc2fc87114680a3001fce2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724006477314272171,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gfzr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3bbb6627-4188-4af8-be9f-7ab6d69dc1cd,},Anno
tations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cdcb0f86abd1e33c1549378579fa46c6ca3833b26e38eabac59d25a347b404,PodSandboxId:00e7172351b19149c088c74dacdffe791da9990d6607ba0a877f18b013d377e4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724006476293709115,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-84jzr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da04e
32b-b203-4df3-b7bd-27ba44c8f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82,PodSandboxId:4eb31a7309e2d9e00d47f9e19e2cf9196209e2e7af5cbaa64cf0c33dd0bf8d89,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724006460612151163,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-77bnz,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 2aab5d03-7625-4a01-841b-830c70fa8ee2,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26,PodSandboxId:87d2727757b886908b73d3e1e4b5e2879f8d122a25bc0c44aa35e09926b74c3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724006417727527371,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5b5ca7-00f4-4361-b31f-7230472ba62f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b,PodSandboxId:e176d7e5b3a19bd795c31c4059d175b5e4852f1268248e6e5338884f49f28183,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724006413989541129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-qghrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad57a4a-3bea-4aae-a41d-7fbabaf0feea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd,PodSandboxId:550f8b6bda1ed09890a07874a4b9eb99f2164b8fbfd9da181ecb6d73d1657b00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724006411666469717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79skb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6eb18e-2c70-4df3-9ea4-f4fe95133083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255,PodSandboxId:8a106fb4afc306d3ad2c6defe51444723f71776e181e97f57260826960ec94ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724006399842649059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a9117501a7e8b6f167fdb23ec7a923,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341,PodSandboxId:0902496d87924dd1fb68f933449dbdaa8468823e905ea861b33b7833ed1446de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724006399856649917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5358552ce31d2587d5ceaafc457b3494,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637,PodSandboxId:b7c7c626556f444cdfd36625c4520758ad1799e0bda355f0acb81aa999270181,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:C
ONTAINER_RUNNING,CreatedAt:1724006399855025711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb947c5b4b80f32c6c8dfdb9c646073,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326,PodSandboxId:a08e1e720570d8e0df2b88c0c3dc5f8915f6765f22d3d1d70ad01ea15359a661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,
CreatedAt:1724006399678967782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5be7b7eabb1824f09b6daa59a48bc50,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5bb99d8-3494-4ede-8c40-66d40b03b4c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.631574416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb85632b-0963-466b-9825-6fd5c701e594 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.631645646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb85632b-0963-466b-9825-6fd5c701e594 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.632761886Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2088c27e-3175-48f7-9a6e-e14da5bccac9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.633966793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006723633940588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2088c27e-3175-48f7-9a6e-e14da5bccac9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.634582618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a08631b4-41f4-42ab-bb04-8099021aadc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.634632516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a08631b4-41f4-42ab-bb04-8099021aadc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.634897810Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c2d29ed42516dfaf00bab1a838c272670f4b671e56dc9e9323ef64f43654e11,PodSandboxId:227d7bf7cfb04a1848a26693acc7cfb1ae1d1bfd8bc273bc2059a8ca55367107,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724006716265043239,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvkpg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff73ba9d4dff434aa03df0d0bdd1b1b0de753556554e7e3335fd082e29c2229,PodSandboxId:0c7f1370497b8da151c2909ee57d9584fd228b10478cd9fabce97d63822dfbd3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724006574730375498,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90ac7d42-930b-44c7-ad80-7da227b904c7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446bc242344d894dca8c05396944e19be7a2d8f25baf1599ca4a184fa0f31a3,PodSandboxId:1596f335b37ca9d956f32ea5453458571d030b3eeea0f83d3afcb2a979492d44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724006507524870724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 416e0a4e-2e7a-40b1-9
4c0-c6346c58a7cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad347e385d558290391d896bdfc3e4865abc458c4aa7cb3d7a5be3ac8293b185,PodSandboxId:5ffaabe43326476598bcb8c1e744c83efca48c233bbc2fc87114680a3001fce2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724006477314272171,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gfzr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3bbb6627-4188-4af8-be9f-7ab6d69dc1cd,},Anno
tations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cdcb0f86abd1e33c1549378579fa46c6ca3833b26e38eabac59d25a347b404,PodSandboxId:00e7172351b19149c088c74dacdffe791da9990d6607ba0a877f18b013d377e4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724006476293709115,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-84jzr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da04e
32b-b203-4df3-b7bd-27ba44c8f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82,PodSandboxId:4eb31a7309e2d9e00d47f9e19e2cf9196209e2e7af5cbaa64cf0c33dd0bf8d89,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724006460612151163,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-77bnz,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 2aab5d03-7625-4a01-841b-830c70fa8ee2,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26,PodSandboxId:87d2727757b886908b73d3e1e4b5e2879f8d122a25bc0c44aa35e09926b74c3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724006417727527371,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5b5ca7-00f4-4361-b31f-7230472ba62f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b,PodSandboxId:e176d7e5b3a19bd795c31c4059d175b5e4852f1268248e6e5338884f49f28183,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724006413989541129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-qghrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad57a4a-3bea-4aae-a41d-7fbabaf0feea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd,PodSandboxId:550f8b6bda1ed09890a07874a4b9eb99f2164b8fbfd9da181ecb6d73d1657b00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724006411666469717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79skb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6eb18e-2c70-4df3-9ea4-f4fe95133083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255,PodSandboxId:8a106fb4afc306d3ad2c6defe51444723f71776e181e97f57260826960ec94ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724006399842649059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a9117501a7e8b6f167fdb23ec7a923,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341,PodSandboxId:0902496d87924dd1fb68f933449dbdaa8468823e905ea861b33b7833ed1446de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724006399856649917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5358552ce31d2587d5ceaafc457b3494,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637,PodSandboxId:b7c7c626556f444cdfd36625c4520758ad1799e0bda355f0acb81aa999270181,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:C
ONTAINER_RUNNING,CreatedAt:1724006399855025711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb947c5b4b80f32c6c8dfdb9c646073,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326,PodSandboxId:a08e1e720570d8e0df2b88c0c3dc5f8915f6765f22d3d1d70ad01ea15359a661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,
CreatedAt:1724006399678967782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5be7b7eabb1824f09b6daa59a48bc50,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a08631b4-41f4-42ab-bb04-8099021aadc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.673611611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc45b69f-8093-44ec-813b-85785c78cc55 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.673684368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc45b69f-8093-44ec-813b-85785c78cc55 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.674818249Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a885efe7-5f29-47ca-bc32-4e358eb95bea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.676355968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006723676328824,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a885efe7-5f29-47ca-bc32-4e358eb95bea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.677135659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27b1b21f-842d-4f7e-872e-04b1736baf05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.677253614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27b1b21f-842d-4f7e-872e-04b1736baf05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.677645618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c2d29ed42516dfaf00bab1a838c272670f4b671e56dc9e9323ef64f43654e11,PodSandboxId:227d7bf7cfb04a1848a26693acc7cfb1ae1d1bfd8bc273bc2059a8ca55367107,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724006716265043239,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvkpg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff73ba9d4dff434aa03df0d0bdd1b1b0de753556554e7e3335fd082e29c2229,PodSandboxId:0c7f1370497b8da151c2909ee57d9584fd228b10478cd9fabce97d63822dfbd3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724006574730375498,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90ac7d42-930b-44c7-ad80-7da227b904c7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446bc242344d894dca8c05396944e19be7a2d8f25baf1599ca4a184fa0f31a3,PodSandboxId:1596f335b37ca9d956f32ea5453458571d030b3eeea0f83d3afcb2a979492d44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724006507524870724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 416e0a4e-2e7a-40b1-9
4c0-c6346c58a7cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad347e385d558290391d896bdfc3e4865abc458c4aa7cb3d7a5be3ac8293b185,PodSandboxId:5ffaabe43326476598bcb8c1e744c83efca48c233bbc2fc87114680a3001fce2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724006477314272171,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gfzr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3bbb6627-4188-4af8-be9f-7ab6d69dc1cd,},Anno
tations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cdcb0f86abd1e33c1549378579fa46c6ca3833b26e38eabac59d25a347b404,PodSandboxId:00e7172351b19149c088c74dacdffe791da9990d6607ba0a877f18b013d377e4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724006476293709115,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-84jzr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da04e
32b-b203-4df3-b7bd-27ba44c8f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82,PodSandboxId:4eb31a7309e2d9e00d47f9e19e2cf9196209e2e7af5cbaa64cf0c33dd0bf8d89,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724006460612151163,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-77bnz,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 2aab5d03-7625-4a01-841b-830c70fa8ee2,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26,PodSandboxId:87d2727757b886908b73d3e1e4b5e2879f8d122a25bc0c44aa35e09926b74c3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724006417727527371,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5b5ca7-00f4-4361-b31f-7230472ba62f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b,PodSandboxId:e176d7e5b3a19bd795c31c4059d175b5e4852f1268248e6e5338884f49f28183,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724006413989541129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-qghrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad57a4a-3bea-4aae-a41d-7fbabaf0feea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd,PodSandboxId:550f8b6bda1ed09890a07874a4b9eb99f2164b8fbfd9da181ecb6d73d1657b00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724006411666469717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79skb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6eb18e-2c70-4df3-9ea4-f4fe95133083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255,PodSandboxId:8a106fb4afc306d3ad2c6defe51444723f71776e181e97f57260826960ec94ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724006399842649059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a9117501a7e8b6f167fdb23ec7a923,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341,PodSandboxId:0902496d87924dd1fb68f933449dbdaa8468823e905ea861b33b7833ed1446de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724006399856649917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5358552ce31d2587d5ceaafc457b3494,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637,PodSandboxId:b7c7c626556f444cdfd36625c4520758ad1799e0bda355f0acb81aa999270181,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:C
ONTAINER_RUNNING,CreatedAt:1724006399855025711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb947c5b4b80f32c6c8dfdb9c646073,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326,PodSandboxId:a08e1e720570d8e0df2b88c0c3dc5f8915f6765f22d3d1d70ad01ea15359a661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,
CreatedAt:1724006399678967782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5be7b7eabb1824f09b6daa59a48bc50,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27b1b21f-842d-4f7e-872e-04b1736baf05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.717938529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01a61875-1773-40c5-ab51-13e9a0fb3b15 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.718013524Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01a61875-1773-40c5-ab51-13e9a0fb3b15 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.719194595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb434a7f-ec03-4ab8-8159-542d6757116c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.721611412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006723721530881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb434a7f-ec03-4ab8-8159-542d6757116c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.723067921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87045ebf-fa0d-4bbc-9225-1d92accad87a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.723143554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87045ebf-fa0d-4bbc-9225-1d92accad87a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:45:23 addons-483094 crio[679]: time="2024-08-18 18:45:23.723506460Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c2d29ed42516dfaf00bab1a838c272670f4b671e56dc9e9323ef64f43654e11,PodSandboxId:227d7bf7cfb04a1848a26693acc7cfb1ae1d1bfd8bc273bc2059a8ca55367107,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724006716265043239,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvkpg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff73ba9d4dff434aa03df0d0bdd1b1b0de753556554e7e3335fd082e29c2229,PodSandboxId:0c7f1370497b8da151c2909ee57d9584fd228b10478cd9fabce97d63822dfbd3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724006574730375498,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90ac7d42-930b-44c7-ad80-7da227b904c7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446bc242344d894dca8c05396944e19be7a2d8f25baf1599ca4a184fa0f31a3,PodSandboxId:1596f335b37ca9d956f32ea5453458571d030b3eeea0f83d3afcb2a979492d44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724006507524870724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 416e0a4e-2e7a-40b1-9
4c0-c6346c58a7cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad347e385d558290391d896bdfc3e4865abc458c4aa7cb3d7a5be3ac8293b185,PodSandboxId:5ffaabe43326476598bcb8c1e744c83efca48c233bbc2fc87114680a3001fce2,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724006477314272171,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gfzr8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3bbb6627-4188-4af8-be9f-7ab6d69dc1cd,},Anno
tations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0cdcb0f86abd1e33c1549378579fa46c6ca3833b26e38eabac59d25a347b404,PodSandboxId:00e7172351b19149c088c74dacdffe791da9990d6607ba0a877f18b013d377e4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724006476293709115,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-84jzr,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da04e
32b-b203-4df3-b7bd-27ba44c8f7c4,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82,PodSandboxId:4eb31a7309e2d9e00d47f9e19e2cf9196209e2e7af5cbaa64cf0c33dd0bf8d89,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724006460612151163,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-77bnz,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 2aab5d03-7625-4a01-841b-830c70fa8ee2,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26,PodSandboxId:87d2727757b886908b73d3e1e4b5e2879f8d122a25bc0c44aa35e09926b74c3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724006417727527371,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5b5ca7-00f4-4361-b31f-7230472ba62f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b,PodSandboxId:e176d7e5b3a19bd795c31c4059d175b5e4852f1268248e6e5338884f49f28183,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724006413989541129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-6f6b679f8f-qghrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad57a4a-3bea-4aae-a41d-7fbabaf0feea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd,PodSandboxId:550f8b6bda1ed09890a07874a4b9eb99f2164b8fbfd9da181ecb6d73d1657b00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b
2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724006411666469717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79skb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6eb18e-2c70-4df3-9ea4-f4fe95133083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255,PodSandboxId:8a106fb4afc306d3ad2c6defe51444723f71776e181e97f57260826960ec94ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87
d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724006399842649059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a9117501a7e8b6f167fdb23ec7a923,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341,PodSandboxId:0902496d87924dd1fb68f933449dbdaa8468823e905ea861b33b7833ed1446de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048c
c4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724006399856649917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5358552ce31d2587d5ceaafc457b3494,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637,PodSandboxId:b7c7c626556f444cdfd36625c4520758ad1799e0bda355f0acb81aa999270181,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:C
ONTAINER_RUNNING,CreatedAt:1724006399855025711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb947c5b4b80f32c6c8dfdb9c646073,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326,PodSandboxId:a08e1e720570d8e0df2b88c0c3dc5f8915f6765f22d3d1d70ad01ea15359a661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,
CreatedAt:1724006399678967782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5be7b7eabb1824f09b6daa59a48bc50,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87045ebf-fa0d-4bbc-9225-1d92accad87a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c2d29ed42516       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   227d7bf7cfb04       hello-world-app-55bf9c44b4-lvkpg
	aff73ba9d4dff       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   0c7f1370497b8       nginx
	a446bc242344d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   1596f335b37ca       busybox
	ad347e385d558       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             4 minutes ago       Exited              patch                     1                   5ffaabe433264       ingress-nginx-admission-patch-gfzr8
	f0cdcb0f86abd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   4 minutes ago       Exited              create                    0                   00e7172351b19       ingress-nginx-admission-create-84jzr
	7c224479822a8       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        4 minutes ago       Running             metrics-server            0                   4eb31a7309e2d       metrics-server-8988944d9-77bnz
	554234cfc6381       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   87d2727757b88       storage-provisioner
	068521e6cfce6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   e176d7e5b3a19       coredns-6f6b679f8f-qghrl
	2f4a9b244767a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             5 minutes ago       Running             kube-proxy                0                   550f8b6bda1ed       kube-proxy-79skb
	255d041856e84       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   0902496d87924       etcd-addons-483094
	30ace9fbe7263       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             5 minutes ago       Running             kube-apiserver            0                   b7c7c626556f4       kube-apiserver-addons-483094
	ffb1c08f476fe       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             5 minutes ago       Running             kube-controller-manager   0                   8a106fb4afc30       kube-controller-manager-addons-483094
	3e6be4f20fdef       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             5 minutes ago       Running             kube-scheduler            0                   a08e1e720570d       kube-scheduler-addons-483094
	
	
	==> coredns [068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b] <==
	[INFO] 10.244.0.7:48292 - 48904 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00198094s
	[INFO] 10.244.0.7:47738 - 41205 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063438s
	[INFO] 10.244.0.7:47738 - 27126 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090932s
	[INFO] 10.244.0.7:42692 - 14150 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061324s
	[INFO] 10.244.0.7:42692 - 11332 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045669s
	[INFO] 10.244.0.7:52227 - 55459 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062698s
	[INFO] 10.244.0.7:52227 - 18849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093155s
	[INFO] 10.244.0.7:49294 - 24259 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072208s
	[INFO] 10.244.0.7:49294 - 34268 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000039388s
	[INFO] 10.244.0.7:39873 - 23137 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007882s
	[INFO] 10.244.0.7:39873 - 45668 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072443s
	[INFO] 10.244.0.7:48201 - 31771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000029199s
	[INFO] 10.244.0.7:48201 - 27929 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091981s
	[INFO] 10.244.0.7:35602 - 25551 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041838s
	[INFO] 10.244.0.7:35602 - 18889 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037474s
	[INFO] 10.244.0.22:53928 - 18241 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000409347s
	[INFO] 10.244.0.22:35985 - 332 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072186s
	[INFO] 10.244.0.22:46878 - 4684 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108381s
	[INFO] 10.244.0.22:40624 - 28689 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069645s
	[INFO] 10.244.0.22:52391 - 7396 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012731s
	[INFO] 10.244.0.22:53157 - 12726 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088011s
	[INFO] 10.244.0.22:34132 - 14793 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000551582s
	[INFO] 10.244.0.22:43725 - 58678 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000638649s
	[INFO] 10.244.0.26:60064 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000444713s
	[INFO] 10.244.0.26:39631 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010703s
	
	
	==> describe nodes <==
	Name:               addons-483094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-483094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=addons-483094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T18_40_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-483094
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:40:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-483094
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 18:45:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:43:09 +0000   Sun, 18 Aug 2024 18:40:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:43:09 +0000   Sun, 18 Aug 2024 18:40:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:43:09 +0000   Sun, 18 Aug 2024 18:40:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:43:09 +0000   Sun, 18 Aug 2024 18:40:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    addons-483094
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c624aa30b43d468f84f32a98bc7e0ee9
	  System UUID:                c624aa30-b43d-468f-84f3-2a98bc7e0ee9
	  Boot ID:                    7d57615f-d260-4095-8c5f-a74965ea1b0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  default                     hello-world-app-55bf9c44b4-lvkpg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-6f6b679f8f-qghrl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m13s
	  kube-system                 etcd-addons-483094                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m18s
	  kube-system                 kube-apiserver-addons-483094             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-addons-483094    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-79skb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-scheduler-addons-483094             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 metrics-server-8988944d9-77bnz           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m8s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m11s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m25s)  kubelet          Node addons-483094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m25s)  kubelet          Node addons-483094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m25s)  kubelet          Node addons-483094 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m18s                  kubelet          Node addons-483094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s                  kubelet          Node addons-483094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s                  kubelet          Node addons-483094 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m17s                  kubelet          Node addons-483094 status is now: NodeReady
	  Normal  RegisteredNode           5m14s                  node-controller  Node addons-483094 event: Registered Node addons-483094 in Controller
	
	
	==> dmesg <==
	[  +5.633595] kauditd_printk_skb: 133 callbacks suppressed
	[  +6.352680] kauditd_printk_skb: 74 callbacks suppressed
	[ +27.226948] kauditd_printk_skb: 4 callbacks suppressed
	[Aug18 18:41] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.713600] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.292546] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.044806] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.934926] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.319163] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.728430] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.371445] kauditd_printk_skb: 45 callbacks suppressed
	[Aug18 18:42] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.820558] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.209896] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.027760] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.128333] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.040213] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.045408] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.433523] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.196847] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.407353] kauditd_printk_skb: 17 callbacks suppressed
	[Aug18 18:43] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.356618] kauditd_printk_skb: 33 callbacks suppressed
	[Aug18 18:45] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.504785] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341] <==
	{"level":"info","ts":"2024-08-18T18:41:17.760422Z","caller":"traceutil/trace.go:171","msg":"trace[852162146] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"308.010897ms","start":"2024-08-18T18:41:17.452400Z","end":"2024-08-18T18:41:17.760411Z","steps":["trace[852162146] 'range keys from in-memory index tree'  (duration: 307.857731ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:17.760449Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T18:41:17.452368Z","time spent":"308.073687ms","remote":"127.0.0.1:37098","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-18T18:41:17.760619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.849494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:1823"}
	{"level":"info","ts":"2024-08-18T18:41:17.760658Z","caller":"traceutil/trace.go:171","msg":"trace[403678913] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:1032; }","duration":"305.888818ms","start":"2024-08-18T18:41:17.454762Z","end":"2024-08-18T18:41:17.760651Z","steps":["trace[403678913] 'range keys from in-memory index tree'  (duration: 305.751083ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:17.760682Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T18:41:17.454733Z","time spent":"305.945202ms","remote":"127.0.0.1:37024","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":1847,"request content":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" "}
	{"level":"warn","ts":"2024-08-18T18:41:17.760924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.309915ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-gfzr8.17ece6c1ab10df48\" ","response":"range_response_count:1 size:783"}
	{"level":"info","ts":"2024-08-18T18:41:17.760975Z","caller":"traceutil/trace.go:171","msg":"trace[161565522] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-admission-patch-gfzr8.17ece6c1ab10df48; range_end:; response_count:1; response_revision:1032; }","duration":"241.360645ms","start":"2024-08-18T18:41:17.519605Z","end":"2024-08-18T18:41:17.760965Z","steps":["trace[161565522] 'range keys from in-memory index tree'  (duration: 241.129175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:17.761080Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.713355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:41:17.761114Z","caller":"traceutil/trace.go:171","msg":"trace[565680782] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"196.749319ms","start":"2024-08-18T18:41:17.564360Z","end":"2024-08-18T18:41:17.761109Z","steps":["trace[565680782] 'range keys from in-memory index tree'  (duration: 196.651111ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:17.761622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.079197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:41:17.761649Z","caller":"traceutil/trace.go:171","msg":"trace[436122815] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"105.109395ms","start":"2024-08-18T18:41:17.656532Z","end":"2024-08-18T18:41:17.761641Z","steps":["trace[436122815] 'range keys from in-memory index tree'  (duration: 105.029605ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:28.736449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.561385ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:41:28.737257Z","caller":"traceutil/trace.go:171","msg":"trace[1647100959] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1122; }","duration":"173.333674ms","start":"2024-08-18T18:41:28.563866Z","end":"2024-08-18T18:41:28.737200Z","steps":["trace[1647100959] 'range keys from in-memory index tree'  (duration: 172.517087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:28.736619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.643525ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:41:28.737384Z","caller":"traceutil/trace.go:171","msg":"trace[148679581] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1122; }","duration":"285.419489ms","start":"2024-08-18T18:41:28.451960Z","end":"2024-08-18T18:41:28.737379Z","steps":["trace[148679581] 'range keys from in-memory index tree'  (duration: 284.554286ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:28.736785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.178223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-08-18T18:41:28.737512Z","caller":"traceutil/trace.go:171","msg":"trace[1213203713] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1122; }","duration":"164.906881ms","start":"2024-08-18T18:41:28.572600Z","end":"2024-08-18T18:41:28.737507Z","steps":["trace[1213203713] 'range keys from in-memory index tree'  (duration: 164.088648ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:41:37.144465Z","caller":"traceutil/trace.go:171","msg":"trace[356874224] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"116.877275ms","start":"2024-08-18T18:41:37.027574Z","end":"2024-08-18T18:41:37.144452Z","steps":["trace[356874224] 'process raft request'  (duration: 116.526044ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:42:26.947756Z","caller":"traceutil/trace.go:171","msg":"trace[126730107] transaction","detail":"{read_only:false; response_revision:1540; number_of_response:1; }","duration":"393.26166ms","start":"2024-08-18T18:42:26.554465Z","end":"2024-08-18T18:42:26.947726Z","steps":["trace[126730107] 'process raft request'  (duration: 392.977838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:42:26.947981Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T18:42:26.554447Z","time spent":"393.405303ms","remote":"127.0.0.1:37154","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-nnk7ey7brwatdk3wj4ugvhz7wi\" mod_revision:1401 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-nnk7ey7brwatdk3wj4ugvhz7wi\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-nnk7ey7brwatdk3wj4ugvhz7wi\" > >"}
	{"level":"info","ts":"2024-08-18T18:42:45.584135Z","caller":"traceutil/trace.go:171","msg":"trace[1559525544] linearizableReadLoop","detail":"{readStateIndex:1719; appliedIndex:1718; }","duration":"289.679135ms","start":"2024-08-18T18:42:45.294430Z","end":"2024-08-18T18:42:45.584109Z","steps":["trace[1559525544] 'read index received'  (duration: 289.479438ms)","trace[1559525544] 'applied index is now lower than readState.Index'  (duration: 199.202µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T18:42:45.585409Z","caller":"traceutil/trace.go:171","msg":"trace[728233129] transaction","detail":"{read_only:false; response_revision:1662; number_of_response:1; }","duration":"352.683922ms","start":"2024-08-18T18:42:45.232708Z","end":"2024-08-18T18:42:45.585392Z","steps":["trace[728233129] 'process raft request'  (duration: 351.263269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:42:45.585520Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T18:42:45.232691Z","time spent":"352.771605ms","remote":"127.0.0.1:37098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":10117,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-zx5jn\" mod_revision:1659 > success:<request_put:<key:\"/registry/pods/gadget/gadget-zx5jn\" value_size:10075 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-zx5jn\" > >"}
	{"level":"warn","ts":"2024-08-18T18:42:45.586034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.59923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-08-18T18:42:45.586094Z","caller":"traceutil/trace.go:171","msg":"trace[1730483542] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1662; }","duration":"291.657059ms","start":"2024-08-18T18:42:45.294425Z","end":"2024-08-18T18:42:45.586082Z","steps":["trace[1730483542] 'agreement among raft nodes before linearized reading'  (duration: 291.527783ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:45:24 up 5 min,  0 users,  load average: 0.24, 0.88, 0.52
	Linux addons-483094 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637] <==
	E0818 18:42:06.555357       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.4.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.4.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.4.186:443: connect: connection refused" logger="UnhandledError"
	E0818 18:42:06.561072       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.4.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.4.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.4.186:443: connect: connection refused" logger="UnhandledError"
	I0818 18:42:06.636915       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0818 18:42:21.034507       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.190.186"}
	E0818 18:42:36.342822       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0818 18:42:44.504777       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0818 18:42:45.558951       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0818 18:42:49.983965       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0818 18:42:50.185654       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.200.243"}
	I0818 18:42:55.609512       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0818 18:43:29.144860       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.154122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0818 18:43:29.179025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.179084       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0818 18:43:29.194278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.194333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0818 18:43:29.298255       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.298303       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0818 18:43:29.316795       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.316846       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0818 18:43:30.299293       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0818 18:43:30.318053       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0818 18:43:30.323444       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0818 18:45:13.316976       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.146.166"}
	E0818 18:45:15.888653       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255] <==
	W0818 18:44:10.842193       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:44:10.842331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:44:12.040981       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:44:12.041093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:44:36.879183       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:44:36.879443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:44:49.003466       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:44:49.003633       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:44:53.719854       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:44:53.719999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:45:00.631621       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:45:00.631786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0818 18:45:13.157188       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.329006ms"
	I0818 18:45:13.189084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.776553ms"
	I0818 18:45:13.212635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="23.459427ms"
	I0818 18:45:13.212724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.169µs"
	I0818 18:45:15.784073       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0818 18:45:15.793144       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0818 18:45:15.795574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="4.145µs"
	I0818 18:45:17.196267       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.636137ms"
	I0818 18:45:17.196350       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.411µs"
	W0818 18:45:17.972424       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:45:17.972532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:45:19.550902       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:45:19.551014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:40:12.452866       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:40:12.482550       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.116"]
	E0818 18:40:12.482650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:40:12.662688       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:40:12.662722       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:40:12.662747       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:40:12.693363       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:40:12.693599       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:40:12.693609       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:40:12.697820       1 config.go:197] "Starting service config controller"
	I0818 18:40:12.697835       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:40:12.697858       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:40:12.697861       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:40:12.705509       1 config.go:326] "Starting node config controller"
	I0818 18:40:12.705520       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:40:12.798349       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 18:40:12.798421       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:40:12.805799       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326] <==
	E0818 18:40:02.748262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0818 18:40:02.748272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:02.747665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 18:40:02.748377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:02.748460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 18:40:02.748486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:02.748526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 18:40:02.748565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.607413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 18:40:03.607470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.648418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 18:40:03.648470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.675587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0818 18:40:03.675605       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 18:40:03.675858       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0818 18:40:03.675726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.753930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 18:40:03.753995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.841479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 18:40:03.841544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.917118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 18:40:03.917175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.970769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 18:40:03.970834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0818 18:40:06.837458       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 18:45:13 addons-483094 kubelet[1236]: I0818 18:45:13.158946    1236 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cc6e122-f0b7-48f4-a9f4-f34bcb84c3d2" containerName="volume-snapshot-controller"
	Aug 18 18:45:13 addons-483094 kubelet[1236]: I0818 18:45:13.205892    1236 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttj6q\" (UniqueName: \"kubernetes.io/projected/b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c-kube-api-access-ttj6q\") pod \"hello-world-app-55bf9c44b4-lvkpg\" (UID: \"b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c\") " pod="default/hello-world-app-55bf9c44b4-lvkpg"
	Aug 18 18:45:14 addons-483094 kubelet[1236]: I0818 18:45:14.314549    1236 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs5zt\" (UniqueName: \"kubernetes.io/projected/7cdd6d54-a545-4f73-8e7b-95fa3aedf907-kube-api-access-fs5zt\") pod \"7cdd6d54-a545-4f73-8e7b-95fa3aedf907\" (UID: \"7cdd6d54-a545-4f73-8e7b-95fa3aedf907\") "
	Aug 18 18:45:14 addons-483094 kubelet[1236]: I0818 18:45:14.316780    1236 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdd6d54-a545-4f73-8e7b-95fa3aedf907-kube-api-access-fs5zt" (OuterVolumeSpecName: "kube-api-access-fs5zt") pod "7cdd6d54-a545-4f73-8e7b-95fa3aedf907" (UID: "7cdd6d54-a545-4f73-8e7b-95fa3aedf907"). InnerVolumeSpecName "kube-api-access-fs5zt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 18 18:45:14 addons-483094 kubelet[1236]: I0818 18:45:14.415294    1236 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fs5zt\" (UniqueName: \"kubernetes.io/projected/7cdd6d54-a545-4f73-8e7b-95fa3aedf907-kube-api-access-fs5zt\") on node \"addons-483094\" DevicePath \"\""
	Aug 18 18:45:15 addons-483094 kubelet[1236]: I0818 18:45:15.157057    1236 scope.go:117] "RemoveContainer" containerID="ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4"
	Aug 18 18:45:15 addons-483094 kubelet[1236]: I0818 18:45:15.178401    1236 scope.go:117] "RemoveContainer" containerID="ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4"
	Aug 18 18:45:15 addons-483094 kubelet[1236]: E0818 18:45:15.181783    1236 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4\": container with ID starting with ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4 not found: ID does not exist" containerID="ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4"
	Aug 18 18:45:15 addons-483094 kubelet[1236]: I0818 18:45:15.182043    1236 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4"} err="failed to get container status \"ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4\": rpc error: code = NotFound desc = could not find container \"ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4\": container with ID starting with ea059d92051835d37ba726c217f2f1845fe7dfaf3a0ab2650924aab1039210e4 not found: ID does not exist"
	Aug 18 18:45:15 addons-483094 kubelet[1236]: I0818 18:45:15.368366    1236 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cdd6d54-a545-4f73-8e7b-95fa3aedf907" path="/var/lib/kubelet/pods/7cdd6d54-a545-4f73-8e7b-95fa3aedf907/volumes"
	Aug 18 18:45:15 addons-483094 kubelet[1236]: E0818 18:45:15.605944    1236 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006715604119280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585117,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:45:15 addons-483094 kubelet[1236]: E0818 18:45:15.605986    1236 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006715604119280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585117,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:45:17 addons-483094 kubelet[1236]: I0818 18:45:17.368514    1236 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbb6627-4188-4af8-be9f-7ab6d69dc1cd" path="/var/lib/kubelet/pods/3bbb6627-4188-4af8-be9f-7ab6d69dc1cd/volumes"
	Aug 18 18:45:17 addons-483094 kubelet[1236]: I0818 18:45:17.368955    1236 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da04e32b-b203-4df3-b7bd-27ba44c8f7c4" path="/var/lib/kubelet/pods/da04e32b-b203-4df3-b7bd-27ba44c8f7c4/volumes"
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.049738    1236 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj8fh\" (UniqueName: \"kubernetes.io/projected/ca02ca4a-e756-48cb-ae28-3058162c088e-kube-api-access-nj8fh\") pod \"ca02ca4a-e756-48cb-ae28-3058162c088e\" (UID: \"ca02ca4a-e756-48cb-ae28-3058162c088e\") "
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.049780    1236 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca02ca4a-e756-48cb-ae28-3058162c088e-webhook-cert\") pod \"ca02ca4a-e756-48cb-ae28-3058162c088e\" (UID: \"ca02ca4a-e756-48cb-ae28-3058162c088e\") "
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.052032    1236 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca02ca4a-e756-48cb-ae28-3058162c088e-kube-api-access-nj8fh" (OuterVolumeSpecName: "kube-api-access-nj8fh") pod "ca02ca4a-e756-48cb-ae28-3058162c088e" (UID: "ca02ca4a-e756-48cb-ae28-3058162c088e"). InnerVolumeSpecName "kube-api-access-nj8fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.053102    1236 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca02ca4a-e756-48cb-ae28-3058162c088e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ca02ca4a-e756-48cb-ae28-3058162c088e" (UID: "ca02ca4a-e756-48cb-ae28-3058162c088e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.150669    1236 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nj8fh\" (UniqueName: \"kubernetes.io/projected/ca02ca4a-e756-48cb-ae28-3058162c088e-kube-api-access-nj8fh\") on node \"addons-483094\" DevicePath \"\""
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.150696    1236 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca02ca4a-e756-48cb-ae28-3058162c088e-webhook-cert\") on node \"addons-483094\" DevicePath \"\""
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.180656    1236 scope.go:117] "RemoveContainer" containerID="00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043"
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.205538    1236 scope.go:117] "RemoveContainer" containerID="00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043"
	Aug 18 18:45:19 addons-483094 kubelet[1236]: E0818 18:45:19.206064    1236 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043\": container with ID starting with 00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043 not found: ID does not exist" containerID="00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043"
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.206148    1236 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043"} err="failed to get container status \"00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043\": rpc error: code = NotFound desc = could not find container \"00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043\": container with ID starting with 00ff629f0a90d1a2151538fe73a10fb2c9cbf7abef9ff8de47652a0986ca6043 not found: ID does not exist"
	Aug 18 18:45:19 addons-483094 kubelet[1236]: I0818 18:45:19.370375    1236 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca02ca4a-e756-48cb-ae28-3058162c088e" path="/var/lib/kubelet/pods/ca02ca4a-e756-48cb-ae28-3058162c088e/volumes"
	
	
	==> storage-provisioner [554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26] <==
	I0818 18:40:19.152194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 18:40:19.169287       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 18:40:19.169388       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 18:40:19.207059       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 18:40:19.209496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-483094_1a00a840-75be-41ec-aa14-5fa0dc2b943c!
	I0818 18:40:19.227319       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d92a9c-391b-4d52-87c5-30c4c554cd9b", APIVersion:"v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-483094_1a00a840-75be-41ec-aa14-5fa0dc2b943c became leader
	I0818 18:40:19.309848       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-483094_1a00a840-75be-41ec-aa14-5fa0dc2b943c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-483094 -n addons-483094
helpers_test.go:261: (dbg) Run:  kubectl --context addons-483094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.04s)

                                                
                                    
TestAddons/parallel/MetricsServer (327.89s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.253711ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-77bnz" [2aab5d03-7625-4a01-841b-830c70fa8ee2] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.06243925s
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (97.897475ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 2m34.72870889s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (63.297739ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 2m39.211611492s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (79.815272ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 2m43.704261175s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (65.409435ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 2m52.574387325s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (61.520505ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 3m7.272794139s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (61.648118ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 3m18.002774586s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (59.623868ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 3m37.299189009s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (62.370607ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 4m3.948282934s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (63.897731ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 5m10.493025901s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (63.452027ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 6m27.442941501s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-483094 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-483094 top pods -n kube-system: exit status 1 (63.477839ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-qghrl, age: 7m53.77498641s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
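Note: the repeated non-zero exits above are the test polling kubectl top pods while the metrics API never becomes available. A minimal manual re-check against the same profile (a sketch only, not part of the test; it assumes the addons-483094 context is still reachable and the addon's usual deployment name metrics-server):
	kubectl --context addons-483094 -n kube-system get deployment metrics-server
	kubectl --context addons-483094 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-483094 top pods -n kube-system
If the APIService is not Available or the metrics-server deployment has no ready replicas, kubectl top keeps returning "Metrics not available", matching the stderr captured above.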
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-483094 -n addons-483094
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 logs -n 25: (1.343007313s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-371992                                                                     | download-only-371992 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-128446 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC |                     |
	|         | binary-mirror-128446                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41387                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-128446                                                                     | binary-mirror-128446 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC |                     |
	|         | addons-483094                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC |                     |
	|         | addons-483094                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-483094 --wait=true                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:41 UTC | 18 Aug 24 18:42 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | -p addons-483094                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-483094 ssh cat                                                                       | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | /opt/local-path-provisioner/pvc-512d0e6d-7527-4406-847a-81e42c2ab4b4_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | addons-483094                                                                               |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:43 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | -p addons-483094                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-483094 ip                                                                            | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:42 UTC | 18 Aug 24 18:42 UTC |
	|         | addons-483094                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-483094 ssh curl -s                                                                   | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-483094 addons                                                                        | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:43 UTC | 18 Aug 24 18:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-483094 addons                                                                        | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:43 UTC | 18 Aug 24 18:43 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-483094 ip                                                                            | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:45 UTC |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:45 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-483094 addons disable                                                                | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:45 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-483094 addons                                                                        | addons-483094        | jenkins | v1.33.1 | 18 Aug 24 18:48 UTC | 18 Aug 24 18:48 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:39:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:39:23.638738   15764 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:39:23.638995   15764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:39:23.639004   15764 out.go:358] Setting ErrFile to fd 2...
	I0818 18:39:23.639009   15764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:39:23.639167   15764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 18:39:23.639762   15764 out.go:352] Setting JSON to false
	I0818 18:39:23.640517   15764 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1308,"bootTime":1724005056,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:39:23.640566   15764 start.go:139] virtualization: kvm guest
	I0818 18:39:23.642597   15764 out.go:177] * [addons-483094] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:39:23.644246   15764 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:39:23.644248   15764 notify.go:220] Checking for updates...
	I0818 18:39:23.646745   15764 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:39:23.647924   15764 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:39:23.649094   15764 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:39:23.650228   15764 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:39:23.651472   15764 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:39:23.652807   15764 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:39:23.684403   15764 out.go:177] * Using the kvm2 driver based on user configuration
	I0818 18:39:23.685479   15764 start.go:297] selected driver: kvm2
	I0818 18:39:23.685496   15764 start.go:901] validating driver "kvm2" against <nil>
	I0818 18:39:23.685506   15764 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:39:23.686184   15764 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:39:23.686275   15764 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 18:39:23.701233   15764 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 18:39:23.701274   15764 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:39:23.701466   15764 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:39:23.701527   15764 cni.go:84] Creating CNI manager for ""
	I0818 18:39:23.701536   15764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 18:39:23.701549   15764 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 18:39:23.701600   15764 start.go:340] cluster config:
	{Name:addons-483094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:39:23.701704   15764 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:39:23.703620   15764 out.go:177] * Starting "addons-483094" primary control-plane node in "addons-483094" cluster
	I0818 18:39:23.704902   15764 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:39:23.704938   15764 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 18:39:23.704947   15764 cache.go:56] Caching tarball of preloaded images
	I0818 18:39:23.705012   15764 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 18:39:23.705022   15764 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 18:39:23.705334   15764 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/config.json ...
	I0818 18:39:23.705351   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/config.json: {Name:mkc1f748b6b929ccbaa374580668e65846b66e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:23.705489   15764 start.go:360] acquireMachinesLock for addons-483094: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:39:23.705534   15764 start.go:364] duration metric: took 30.356µs to acquireMachinesLock for "addons-483094"
	I0818 18:39:23.705558   15764 start.go:93] Provisioning new machine with config: &{Name:addons-483094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:39:23.705615   15764 start.go:125] createHost starting for "" (driver="kvm2")
	I0818 18:39:23.707168   15764 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0818 18:39:23.707289   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:39:23.707328   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:39:23.721606   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45945
	I0818 18:39:23.722016   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:39:23.722592   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:39:23.722617   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:39:23.722991   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:39:23.723208   15764 main.go:141] libmachine: (addons-483094) Calling .GetMachineName
	I0818 18:39:23.723355   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:23.723549   15764 start.go:159] libmachine.API.Create for "addons-483094" (driver="kvm2")
	I0818 18:39:23.723575   15764 client.go:168] LocalClient.Create starting
	I0818 18:39:23.723612   15764 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 18:39:23.795179   15764 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 18:39:23.881029   15764 main.go:141] libmachine: Running pre-create checks...
	I0818 18:39:23.881052   15764 main.go:141] libmachine: (addons-483094) Calling .PreCreateCheck
	I0818 18:39:23.881553   15764 main.go:141] libmachine: (addons-483094) Calling .GetConfigRaw
	I0818 18:39:23.881930   15764 main.go:141] libmachine: Creating machine...
	I0818 18:39:23.881944   15764 main.go:141] libmachine: (addons-483094) Calling .Create
	I0818 18:39:23.882095   15764 main.go:141] libmachine: (addons-483094) Creating KVM machine...
	I0818 18:39:23.883649   15764 main.go:141] libmachine: (addons-483094) DBG | found existing default KVM network
	I0818 18:39:23.884343   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:23.884197   15786 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0818 18:39:23.884368   15764 main.go:141] libmachine: (addons-483094) DBG | created network xml: 
	I0818 18:39:23.884378   15764 main.go:141] libmachine: (addons-483094) DBG | <network>
	I0818 18:39:23.884390   15764 main.go:141] libmachine: (addons-483094) DBG |   <name>mk-addons-483094</name>
	I0818 18:39:23.884396   15764 main.go:141] libmachine: (addons-483094) DBG |   <dns enable='no'/>
	I0818 18:39:23.884402   15764 main.go:141] libmachine: (addons-483094) DBG |   
	I0818 18:39:23.884409   15764 main.go:141] libmachine: (addons-483094) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0818 18:39:23.884419   15764 main.go:141] libmachine: (addons-483094) DBG |     <dhcp>
	I0818 18:39:23.884428   15764 main.go:141] libmachine: (addons-483094) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0818 18:39:23.884438   15764 main.go:141] libmachine: (addons-483094) DBG |     </dhcp>
	I0818 18:39:23.884446   15764 main.go:141] libmachine: (addons-483094) DBG |   </ip>
	I0818 18:39:23.884458   15764 main.go:141] libmachine: (addons-483094) DBG |   
	I0818 18:39:23.884465   15764 main.go:141] libmachine: (addons-483094) DBG | </network>
	I0818 18:39:23.884475   15764 main.go:141] libmachine: (addons-483094) DBG | 
	I0818 18:39:23.889949   15764 main.go:141] libmachine: (addons-483094) DBG | trying to create private KVM network mk-addons-483094 192.168.39.0/24...
	I0818 18:39:23.953191   15764 main.go:141] libmachine: (addons-483094) DBG | private KVM network mk-addons-483094 192.168.39.0/24 created
	I0818 18:39:23.953217   15764 main.go:141] libmachine: (addons-483094) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094 ...
	I0818 18:39:23.953231   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:23.953184   15786 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:39:23.953252   15764 main.go:141] libmachine: (addons-483094) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:39:23.953305   15764 main.go:141] libmachine: (addons-483094) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 18:39:24.225940   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:24.225806   15786 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa...
	I0818 18:39:24.405855   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:24.405746   15786 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/addons-483094.rawdisk...
	I0818 18:39:24.405879   15764 main.go:141] libmachine: (addons-483094) DBG | Writing magic tar header
	I0818 18:39:24.405890   15764 main.go:141] libmachine: (addons-483094) DBG | Writing SSH key tar header
	I0818 18:39:24.405897   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:24.405861   15786 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094 ...
	I0818 18:39:24.406017   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094
	I0818 18:39:24.406039   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 18:39:24.406051   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094 (perms=drwx------)
	I0818 18:39:24.406064   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 18:39:24.406074   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 18:39:24.406085   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 18:39:24.406093   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 18:39:24.406101   15764 main.go:141] libmachine: (addons-483094) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 18:39:24.406112   15764 main.go:141] libmachine: (addons-483094) Creating domain...
	I0818 18:39:24.406119   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:39:24.406139   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 18:39:24.406148   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 18:39:24.406160   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home/jenkins
	I0818 18:39:24.406170   15764 main.go:141] libmachine: (addons-483094) DBG | Checking permissions on dir: /home
	I0818 18:39:24.406181   15764 main.go:141] libmachine: (addons-483094) DBG | Skipping /home - not owner
	I0818 18:39:24.407035   15764 main.go:141] libmachine: (addons-483094) define libvirt domain using xml: 
	I0818 18:39:24.407048   15764 main.go:141] libmachine: (addons-483094) <domain type='kvm'>
	I0818 18:39:24.407054   15764 main.go:141] libmachine: (addons-483094)   <name>addons-483094</name>
	I0818 18:39:24.407059   15764 main.go:141] libmachine: (addons-483094)   <memory unit='MiB'>4000</memory>
	I0818 18:39:24.407064   15764 main.go:141] libmachine: (addons-483094)   <vcpu>2</vcpu>
	I0818 18:39:24.407068   15764 main.go:141] libmachine: (addons-483094)   <features>
	I0818 18:39:24.407073   15764 main.go:141] libmachine: (addons-483094)     <acpi/>
	I0818 18:39:24.407078   15764 main.go:141] libmachine: (addons-483094)     <apic/>
	I0818 18:39:24.407083   15764 main.go:141] libmachine: (addons-483094)     <pae/>
	I0818 18:39:24.407087   15764 main.go:141] libmachine: (addons-483094)     
	I0818 18:39:24.407092   15764 main.go:141] libmachine: (addons-483094)   </features>
	I0818 18:39:24.407098   15764 main.go:141] libmachine: (addons-483094)   <cpu mode='host-passthrough'>
	I0818 18:39:24.407103   15764 main.go:141] libmachine: (addons-483094)   
	I0818 18:39:24.407113   15764 main.go:141] libmachine: (addons-483094)   </cpu>
	I0818 18:39:24.407122   15764 main.go:141] libmachine: (addons-483094)   <os>
	I0818 18:39:24.407126   15764 main.go:141] libmachine: (addons-483094)     <type>hvm</type>
	I0818 18:39:24.407131   15764 main.go:141] libmachine: (addons-483094)     <boot dev='cdrom'/>
	I0818 18:39:24.407135   15764 main.go:141] libmachine: (addons-483094)     <boot dev='hd'/>
	I0818 18:39:24.407140   15764 main.go:141] libmachine: (addons-483094)     <bootmenu enable='no'/>
	I0818 18:39:24.407152   15764 main.go:141] libmachine: (addons-483094)   </os>
	I0818 18:39:24.407157   15764 main.go:141] libmachine: (addons-483094)   <devices>
	I0818 18:39:24.407162   15764 main.go:141] libmachine: (addons-483094)     <disk type='file' device='cdrom'>
	I0818 18:39:24.407169   15764 main.go:141] libmachine: (addons-483094)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/boot2docker.iso'/>
	I0818 18:39:24.407175   15764 main.go:141] libmachine: (addons-483094)       <target dev='hdc' bus='scsi'/>
	I0818 18:39:24.407180   15764 main.go:141] libmachine: (addons-483094)       <readonly/>
	I0818 18:39:24.407184   15764 main.go:141] libmachine: (addons-483094)     </disk>
	I0818 18:39:24.407190   15764 main.go:141] libmachine: (addons-483094)     <disk type='file' device='disk'>
	I0818 18:39:24.407203   15764 main.go:141] libmachine: (addons-483094)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 18:39:24.407213   15764 main.go:141] libmachine: (addons-483094)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/addons-483094.rawdisk'/>
	I0818 18:39:24.407224   15764 main.go:141] libmachine: (addons-483094)       <target dev='hda' bus='virtio'/>
	I0818 18:39:24.407251   15764 main.go:141] libmachine: (addons-483094)     </disk>
	I0818 18:39:24.407271   15764 main.go:141] libmachine: (addons-483094)     <interface type='network'>
	I0818 18:39:24.407288   15764 main.go:141] libmachine: (addons-483094)       <source network='mk-addons-483094'/>
	I0818 18:39:24.407304   15764 main.go:141] libmachine: (addons-483094)       <model type='virtio'/>
	I0818 18:39:24.407326   15764 main.go:141] libmachine: (addons-483094)     </interface>
	I0818 18:39:24.407346   15764 main.go:141] libmachine: (addons-483094)     <interface type='network'>
	I0818 18:39:24.407359   15764 main.go:141] libmachine: (addons-483094)       <source network='default'/>
	I0818 18:39:24.407368   15764 main.go:141] libmachine: (addons-483094)       <model type='virtio'/>
	I0818 18:39:24.407375   15764 main.go:141] libmachine: (addons-483094)     </interface>
	I0818 18:39:24.407406   15764 main.go:141] libmachine: (addons-483094)     <serial type='pty'>
	I0818 18:39:24.407419   15764 main.go:141] libmachine: (addons-483094)       <target port='0'/>
	I0818 18:39:24.407428   15764 main.go:141] libmachine: (addons-483094)     </serial>
	I0818 18:39:24.407437   15764 main.go:141] libmachine: (addons-483094)     <console type='pty'>
	I0818 18:39:24.407453   15764 main.go:141] libmachine: (addons-483094)       <target type='serial' port='0'/>
	I0818 18:39:24.407465   15764 main.go:141] libmachine: (addons-483094)     </console>
	I0818 18:39:24.407481   15764 main.go:141] libmachine: (addons-483094)     <rng model='virtio'>
	I0818 18:39:24.407509   15764 main.go:141] libmachine: (addons-483094)       <backend model='random'>/dev/random</backend>
	I0818 18:39:24.407529   15764 main.go:141] libmachine: (addons-483094)     </rng>
	I0818 18:39:24.407538   15764 main.go:141] libmachine: (addons-483094)     
	I0818 18:39:24.407547   15764 main.go:141] libmachine: (addons-483094)     
	I0818 18:39:24.407557   15764 main.go:141] libmachine: (addons-483094)   </devices>
	I0818 18:39:24.407571   15764 main.go:141] libmachine: (addons-483094) </domain>
	I0818 18:39:24.407594   15764 main.go:141] libmachine: (addons-483094) 
	I0818 18:39:24.413473   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:66:aa:dd in network default
	I0818 18:39:24.413926   15764 main.go:141] libmachine: (addons-483094) Ensuring networks are active...
	I0818 18:39:24.413943   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:24.414519   15764 main.go:141] libmachine: (addons-483094) Ensuring network default is active
	I0818 18:39:24.414766   15764 main.go:141] libmachine: (addons-483094) Ensuring network mk-addons-483094 is active
	I0818 18:39:24.415196   15764 main.go:141] libmachine: (addons-483094) Getting domain xml...
	I0818 18:39:24.415758   15764 main.go:141] libmachine: (addons-483094) Creating domain...
	I0818 18:39:25.783780   15764 main.go:141] libmachine: (addons-483094) Waiting to get IP...
	I0818 18:39:25.784472   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:25.784770   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:25.784804   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:25.784764   15786 retry.go:31] will retry after 289.335953ms: waiting for machine to come up
	I0818 18:39:26.075176   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:26.075649   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:26.075669   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:26.075598   15786 retry.go:31] will retry after 259.825296ms: waiting for machine to come up
	I0818 18:39:26.337111   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:26.337576   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:26.337604   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:26.337538   15786 retry.go:31] will retry after 333.382386ms: waiting for machine to come up
	I0818 18:39:26.671950   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:26.672315   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:26.672345   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:26.672296   15786 retry.go:31] will retry after 547.509595ms: waiting for machine to come up
	I0818 18:39:27.220962   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:27.221455   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:27.221484   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:27.221400   15786 retry.go:31] will retry after 625.960376ms: waiting for machine to come up
	I0818 18:39:27.849259   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:27.849689   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:27.849706   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:27.849657   15786 retry.go:31] will retry after 846.775747ms: waiting for machine to come up
	I0818 18:39:28.697533   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:28.697875   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:28.697902   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:28.697831   15786 retry.go:31] will retry after 1.174784407s: waiting for machine to come up
	I0818 18:39:29.874481   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:29.874889   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:29.874916   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:29.874842   15786 retry.go:31] will retry after 1.327652727s: waiting for machine to come up
	I0818 18:39:31.204223   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:31.204687   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:31.204718   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:31.204639   15786 retry.go:31] will retry after 1.243836663s: waiting for machine to come up
	I0818 18:39:32.449942   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:32.450370   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:32.450394   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:32.450331   15786 retry.go:31] will retry after 1.494727458s: waiting for machine to come up
	I0818 18:39:33.946788   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:33.947170   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:33.947203   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:33.947103   15786 retry.go:31] will retry after 2.279766974s: waiting for machine to come up
	I0818 18:39:36.229552   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:36.229944   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:36.229969   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:36.229899   15786 retry.go:31] will retry after 3.273425506s: waiting for machine to come up
	I0818 18:39:39.504724   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:39.505123   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:39.505156   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:39.505072   15786 retry.go:31] will retry after 3.797821303s: waiting for machine to come up
	I0818 18:39:43.306946   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:43.307352   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find current IP address of domain addons-483094 in network mk-addons-483094
	I0818 18:39:43.307411   15764 main.go:141] libmachine: (addons-483094) DBG | I0818 18:39:43.307317   15786 retry.go:31] will retry after 4.699729994s: waiting for machine to come up
	I0818 18:39:48.012080   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.012480   15764 main.go:141] libmachine: (addons-483094) Found IP for machine: 192.168.39.116
	I0818 18:39:48.012506   15764 main.go:141] libmachine: (addons-483094) Reserving static IP address...
	I0818 18:39:48.012536   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has current primary IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.012828   15764 main.go:141] libmachine: (addons-483094) DBG | unable to find host DHCP lease matching {name: "addons-483094", mac: "52:54:00:cd:86:29", ip: "192.168.39.116"} in network mk-addons-483094
	I0818 18:39:48.081064   15764 main.go:141] libmachine: (addons-483094) DBG | Getting to WaitForSSH function...
	I0818 18:39:48.081102   15764 main.go:141] libmachine: (addons-483094) Reserved static IP address: 192.168.39.116
	I0818 18:39:48.081150   15764 main.go:141] libmachine: (addons-483094) Waiting for SSH to be available...
	I0818 18:39:48.083352   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.083696   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.083722   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.083864   15764 main.go:141] libmachine: (addons-483094) DBG | Using SSH client type: external
	I0818 18:39:48.083886   15764 main.go:141] libmachine: (addons-483094) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa (-rw-------)
	I0818 18:39:48.083914   15764 main.go:141] libmachine: (addons-483094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.116 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 18:39:48.083942   15764 main.go:141] libmachine: (addons-483094) DBG | About to run SSH command:
	I0818 18:39:48.083958   15764 main.go:141] libmachine: (addons-483094) DBG | exit 0
	I0818 18:39:48.211143   15764 main.go:141] libmachine: (addons-483094) DBG | SSH cmd err, output: <nil>: 
	I0818 18:39:48.211371   15764 main.go:141] libmachine: (addons-483094) KVM machine creation complete!
	I0818 18:39:48.211734   15764 main.go:141] libmachine: (addons-483094) Calling .GetConfigRaw
	I0818 18:39:48.212280   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:48.212472   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:48.212597   15764 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 18:39:48.212612   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:39:48.213836   15764 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 18:39:48.213850   15764 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 18:39:48.213857   15764 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 18:39:48.213866   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.215875   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.216178   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.216209   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.216345   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.216508   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.216640   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.216747   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.216879   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.217083   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.217096   15764 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 18:39:48.314363   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:39:48.314384   15764 main.go:141] libmachine: Detecting the provisioner...
	I0818 18:39:48.314394   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.316844   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.317120   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.317144   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.317345   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.317526   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.317686   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.317817   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.317955   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.318116   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.318127   15764 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 18:39:48.416096   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 18:39:48.416170   15764 main.go:141] libmachine: found compatible host: buildroot
	I0818 18:39:48.416182   15764 main.go:141] libmachine: Provisioning with buildroot...
	I0818 18:39:48.416194   15764 main.go:141] libmachine: (addons-483094) Calling .GetMachineName
	I0818 18:39:48.416458   15764 buildroot.go:166] provisioning hostname "addons-483094"
	I0818 18:39:48.416483   15764 main.go:141] libmachine: (addons-483094) Calling .GetMachineName
	I0818 18:39:48.416632   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.418922   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.419203   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.419228   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.419412   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.419595   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.419749   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.419855   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.420034   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.420200   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.420212   15764 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-483094 && echo "addons-483094" | sudo tee /etc/hostname
	I0818 18:39:48.534143   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-483094
	
	I0818 18:39:48.534167   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.536671   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.537001   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.537028   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.537246   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.537434   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.537588   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.537723   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.537888   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.538085   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.538110   15764 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-483094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-483094/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-483094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:39:48.646465   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:39:48.646496   15764 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 18:39:48.646532   15764 buildroot.go:174] setting up certificates
	I0818 18:39:48.646547   15764 provision.go:84] configureAuth start
	I0818 18:39:48.646559   15764 main.go:141] libmachine: (addons-483094) Calling .GetMachineName
	I0818 18:39:48.646773   15764 main.go:141] libmachine: (addons-483094) Calling .GetIP
	I0818 18:39:48.649289   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.649644   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.649668   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.649793   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.651947   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.652262   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.652297   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.652465   15764 provision.go:143] copyHostCerts
	I0818 18:39:48.652536   15764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 18:39:48.652671   15764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 18:39:48.652789   15764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 18:39:48.652875   15764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.addons-483094 san=[127.0.0.1 192.168.39.116 addons-483094 localhost minikube]
	I0818 18:39:48.746611   15764 provision.go:177] copyRemoteCerts
	I0818 18:39:48.746665   15764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:39:48.746688   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.749096   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.749416   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.749445   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.749582   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.749804   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.749934   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.750091   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:39:48.829006   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 18:39:48.852681   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 18:39:48.882174   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:39:48.904650   15764 provision.go:87] duration metric: took 258.089014ms to configureAuth
	I0818 18:39:48.904678   15764 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:39:48.904848   15764 config.go:182] Loaded profile config "addons-483094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:39:48.904917   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:48.907557   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.907919   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:48.907949   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:48.908086   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:48.908289   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.908446   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:48.908577   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:48.908717   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:48.908884   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:48.908901   15764 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 18:39:49.159028   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 18:39:49.159052   15764 main.go:141] libmachine: Checking connection to Docker...
	I0818 18:39:49.159063   15764 main.go:141] libmachine: (addons-483094) Calling .GetURL
	I0818 18:39:49.160412   15764 main.go:141] libmachine: (addons-483094) DBG | Using libvirt version 6000000
	I0818 18:39:49.162740   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.163239   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.163281   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.163362   15764 main.go:141] libmachine: Docker is up and running!
	I0818 18:39:49.163374   15764 main.go:141] libmachine: Reticulating splines...
	I0818 18:39:49.163399   15764 client.go:171] duration metric: took 25.439815685s to LocalClient.Create
	I0818 18:39:49.163425   15764 start.go:167] duration metric: took 25.439876359s to libmachine.API.Create "addons-483094"
	I0818 18:39:49.163438   15764 start.go:293] postStartSetup for "addons-483094" (driver="kvm2")
	I0818 18:39:49.163456   15764 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:39:49.163479   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.163696   15764 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:39:49.163717   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:49.165582   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.165860   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.165886   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.165996   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:49.166153   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.166305   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:49.166441   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:39:49.245839   15764 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:39:49.250333   15764 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:39:49.250366   15764 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 18:39:49.250452   15764 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 18:39:49.250485   15764 start.go:296] duration metric: took 87.039346ms for postStartSetup
	I0818 18:39:49.250526   15764 main.go:141] libmachine: (addons-483094) Calling .GetConfigRaw
	I0818 18:39:49.251072   15764 main.go:141] libmachine: (addons-483094) Calling .GetIP
	I0818 18:39:49.253676   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.254057   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.254089   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.254276   15764 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/config.json ...
	I0818 18:39:49.254476   15764 start.go:128] duration metric: took 25.548851466s to createHost
	I0818 18:39:49.254500   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:49.256746   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.257028   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.257055   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.257256   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:49.257451   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.257684   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.257813   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:49.257968   15764 main.go:141] libmachine: Using SSH client type: native
	I0818 18:39:49.258115   15764 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.116 22 <nil> <nil>}
	I0818 18:39:49.258127   15764 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:39:49.355954   15764 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724006389.333647713
	
	I0818 18:39:49.355992   15764 fix.go:216] guest clock: 1724006389.333647713
	I0818 18:39:49.356004   15764 fix.go:229] Guest: 2024-08-18 18:39:49.333647713 +0000 UTC Remote: 2024-08-18 18:39:49.254487665 +0000 UTC m=+25.649012750 (delta=79.160048ms)
	I0818 18:39:49.356038   15764 fix.go:200] guest clock delta is within tolerance: 79.160048ms
	I0818 18:39:49.356049   15764 start.go:83] releasing machines lock for "addons-483094", held for 25.650504481s
	I0818 18:39:49.356073   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.356349   15764 main.go:141] libmachine: (addons-483094) Calling .GetIP
	I0818 18:39:49.358864   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.359194   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.359223   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.359407   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.359920   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.360095   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:39:49.360195   15764 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:39:49.360233   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:49.360394   15764 ssh_runner.go:195] Run: cat /version.json
	I0818 18:39:49.360424   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:39:49.362837   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.363052   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.363165   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.363194   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.363362   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:49.363517   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:49.363539   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:49.363545   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.363708   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:39:49.363740   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:49.363836   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:39:49.363840   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:39:49.363920   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:39:49.364076   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:39:49.436520   15764 ssh_runner.go:195] Run: systemctl --version
	I0818 18:39:49.461313   15764 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 18:39:49.619200   15764 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:39:49.625374   15764 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:39:49.625428   15764 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:39:49.641459   15764 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 18:39:49.641481   15764 start.go:495] detecting cgroup driver to use...
	I0818 18:39:49.641538   15764 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:39:49.657147   15764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:39:49.670333   15764 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:39:49.670380   15764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:39:49.683311   15764 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:39:49.696481   15764 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:39:49.808487   15764 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:39:49.956726   15764 docker.go:233] disabling docker service ...
	I0818 18:39:49.956788   15764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:39:49.971357   15764 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:39:49.983840   15764 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:39:50.117154   15764 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:39:50.227912   15764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:39:50.241810   15764 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:39:50.260361   15764 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 18:39:50.260422   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.270751   15764 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 18:39:50.270815   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.281086   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.291223   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.301506   15764 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:39:50.311651   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.321763   15764 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.338781   15764 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:39:50.349726   15764 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:39:50.359511   15764 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 18:39:50.359569   15764 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 18:39:50.372370   15764 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 18:39:50.382185   15764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:39:50.488894   15764 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 18:39:50.628985   15764 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 18:39:50.629085   15764 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 18:39:50.633722   15764 start.go:563] Will wait 60s for crictl version
	I0818 18:39:50.633795   15764 ssh_runner.go:195] Run: which crictl
	I0818 18:39:50.637355   15764 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:39:50.680758   15764 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 18:39:50.680878   15764 ssh_runner.go:195] Run: crio --version
	I0818 18:39:50.708732   15764 ssh_runner.go:195] Run: crio --version
	I0818 18:39:50.737664   15764 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 18:39:50.738950   15764 main.go:141] libmachine: (addons-483094) Calling .GetIP
	I0818 18:39:50.741592   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:50.741861   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:39:50.741896   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:39:50.742085   15764 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:39:50.746153   15764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:39:50.758303   15764 kubeadm.go:883] updating cluster {Name:addons-483094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 18:39:50.758402   15764 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:39:50.758443   15764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:39:50.790346   15764 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 18:39:50.790406   15764 ssh_runner.go:195] Run: which lz4
	I0818 18:39:50.794436   15764 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 18:39:50.798549   15764 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 18:39:50.798581   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 18:39:52.070103   15764 crio.go:462] duration metric: took 1.275716427s to copy over tarball
	I0818 18:39:52.070189   15764 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 18:39:54.225830   15764 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.155606126s)
	I0818 18:39:54.225863   15764 crio.go:469] duration metric: took 2.155731972s to extract the tarball
	I0818 18:39:54.225872   15764 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 18:39:54.263005   15764 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:39:54.303622   15764 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 18:39:54.303647   15764 cache_images.go:84] Images are preloaded, skipping loading
	I0818 18:39:54.303659   15764 kubeadm.go:934] updating node { 192.168.39.116 8443 v1.31.0 crio true true} ...
	I0818 18:39:54.303756   15764 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-483094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.116
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 18:39:54.303816   15764 ssh_runner.go:195] Run: crio config
	I0818 18:39:54.353832   15764 cni.go:84] Creating CNI manager for ""
	I0818 18:39:54.353857   15764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 18:39:54.353869   15764 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 18:39:54.353896   15764 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.116 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-483094 NodeName:addons-483094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.116"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.116 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 18:39:54.354017   15764 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.116
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-483094"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.116
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.116"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 18:39:54.354082   15764 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:39:54.364004   15764 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 18:39:54.364078   15764 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 18:39:54.373296   15764 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0818 18:39:54.389563   15764 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:39:54.405485   15764 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0818 18:39:54.421483   15764 ssh_runner.go:195] Run: grep 192.168.39.116	control-plane.minikube.internal$ /etc/hosts
	I0818 18:39:54.425112   15764 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.116	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:39:54.436659   15764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:39:54.540906   15764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:39:54.557709   15764 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094 for IP: 192.168.39.116
	I0818 18:39:54.557734   15764 certs.go:194] generating shared ca certs ...
	I0818 18:39:54.557769   15764 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:54.557925   15764 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 18:39:54.818917   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt ...
	I0818 18:39:54.818946   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt: {Name:mkf28c86b13b0e191b3661f8445555323102f0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:54.819117   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key ...
	I0818 18:39:54.819133   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key: {Name:mkd16e1802bbd502ffcae72b3214fd821b6d043a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:54.819206   15764 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 18:39:55.005912   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt ...
	I0818 18:39:55.005941   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt: {Name:mk823029d2bfbeee25dcfc18dc5ffc6c485d4f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.006097   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key ...
	I0818 18:39:55.006108   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key: {Name:mk4a876515714dd5a8a2e980bd42506b854fafff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.006180   15764 certs.go:256] generating profile certs ...
	I0818 18:39:55.006242   15764 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.key
	I0818 18:39:55.006253   15764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt with IP's: []
	I0818 18:39:55.177647   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt ...
	I0818 18:39:55.177676   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: {Name:mkd8ad60be4220e5f64ec42ebfd4985ac651f440 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.177839   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.key ...
	I0818 18:39:55.177850   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.key: {Name:mkf6c9b21a48a4a81f58a3306868d4bf4285dd7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.177917   15764 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key.cdd0c4f4
	I0818 18:39:55.177935   15764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt.cdd0c4f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.116]
	I0818 18:39:55.302821   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt.cdd0c4f4 ...
	I0818 18:39:55.302850   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt.cdd0c4f4: {Name:mk1d2c7b694265936e92869785bd2d5e1339bb1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.303004   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key.cdd0c4f4 ...
	I0818 18:39:55.303017   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key.cdd0c4f4: {Name:mk7e3beb525ce78e634151f988e8ac116b17d619 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.303081   15764 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt.cdd0c4f4 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt
	I0818 18:39:55.303167   15764 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key.cdd0c4f4 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key
	I0818 18:39:55.303215   15764 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.key
	I0818 18:39:55.303231   15764 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.crt with IP's: []
	I0818 18:39:55.590421   15764 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.crt ...
	I0818 18:39:55.590447   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.crt: {Name:mk049da4345d2a77eb809c59f3483b17018aad51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.590597   15764 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.key ...
	I0818 18:39:55.590607   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.key: {Name:mk1642417638d7a02b636ce8a833da5984f461bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:55.590754   15764 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:39:55.590785   15764 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:39:55.590807   15764 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:39:55.590829   15764 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 18:39:55.591431   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:39:55.621255   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:39:55.645828   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:39:55.668983   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 18:39:55.691499   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0818 18:39:55.714434   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 18:39:55.737018   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:39:55.759628   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 18:39:55.782191   15764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:39:55.805843   15764 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 18:39:55.823509   15764 ssh_runner.go:195] Run: openssl version
	I0818 18:39:55.829295   15764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:39:55.840130   15764 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:39:55.844852   15764 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:39:55.844898   15764 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:39:55.850680   15764 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 18:39:55.861143   15764 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:39:55.864997   15764 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:39:55.865044   15764 kubeadm.go:392] StartCluster: {Name:addons-483094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-483094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:39:55.865106   15764 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 18:39:55.865146   15764 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 18:39:55.899722   15764 cri.go:89] found id: ""
	I0818 18:39:55.899784   15764 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 18:39:55.909294   15764 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 18:39:55.918304   15764 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 18:39:55.930221   15764 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 18:39:55.930240   15764 kubeadm.go:157] found existing configuration files:
	
	I0818 18:39:55.930283   15764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 18:39:55.939620   15764 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 18:39:55.939672   15764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 18:39:55.951370   15764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 18:39:55.963067   15764 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 18:39:55.963120   15764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 18:39:55.974776   15764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 18:39:55.986542   15764 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 18:39:55.986596   15764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 18:39:55.999070   15764 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 18:39:56.007871   15764 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 18:39:56.007936   15764 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 18:39:56.016670   15764 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 18:39:56.064804   15764 kubeadm.go:310] W0818 18:39:56.049340     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:39:56.065515   15764 kubeadm.go:310] W0818 18:39:56.050168     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:39:56.176448   15764 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
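The two kubeadm warnings above name their own remedies: migrating the deprecated kubeadm.k8s.io/v1beta3 config to the current API version and enabling the kubelet unit. A minimal sketch of running both on the node, using exactly the commands the warnings quote (old.yaml/new.yaml are placeholder file names, not paths this run uses):

    # migrate the deprecated kubeadm.k8s.io/v1beta3 spec to the current API version
    sudo kubeadm config migrate --old-config old.yaml --new-config new.yaml
    # silence the Service-Kubelet preflight warning
    sudo systemctl enable kubelet.service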
	I0818 18:40:05.988587   15764 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 18:40:05.988648   15764 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 18:40:05.988739   15764 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 18:40:05.988865   15764 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 18:40:05.988953   15764 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 18:40:05.989029   15764 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 18:40:05.990662   15764 out.go:235]   - Generating certificates and keys ...
	I0818 18:40:05.990769   15764 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 18:40:05.990856   15764 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 18:40:05.990954   15764 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0818 18:40:05.991029   15764 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0818 18:40:05.991113   15764 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0818 18:40:05.991180   15764 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0818 18:40:05.991246   15764 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0818 18:40:05.991401   15764 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-483094 localhost] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0818 18:40:05.991482   15764 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0818 18:40:05.991640   15764 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-483094 localhost] and IPs [192.168.39.116 127.0.0.1 ::1]
	I0818 18:40:05.991726   15764 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0818 18:40:05.991808   15764 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0818 18:40:05.991860   15764 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0818 18:40:05.991920   15764 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 18:40:05.991987   15764 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 18:40:05.992078   15764 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 18:40:05.992135   15764 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 18:40:05.992192   15764 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 18:40:05.992239   15764 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 18:40:05.992308   15764 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 18:40:05.992377   15764 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 18:40:05.993830   15764 out.go:235]   - Booting up control plane ...
	I0818 18:40:05.993931   15764 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 18:40:05.993995   15764 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 18:40:05.994068   15764 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 18:40:05.994166   15764 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 18:40:05.994265   15764 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 18:40:05.994301   15764 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 18:40:05.994408   15764 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 18:40:05.994519   15764 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 18:40:05.994618   15764 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.046274ms
	I0818 18:40:05.994684   15764 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 18:40:05.994761   15764 kubeadm.go:310] [api-check] The API server is healthy after 5.501512049s
	I0818 18:40:05.994881   15764 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 18:40:05.994989   15764 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 18:40:05.995044   15764 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 18:40:05.995201   15764 kubeadm.go:310] [mark-control-plane] Marking the node addons-483094 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 18:40:05.995273   15764 kubeadm.go:310] [bootstrap-token] Using token: 4b2dyc.0shil2r35fbxvtub
	I0818 18:40:05.996603   15764 out.go:235]   - Configuring RBAC rules ...
	I0818 18:40:05.996719   15764 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 18:40:05.996792   15764 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 18:40:05.996927   15764 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 18:40:05.997064   15764 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 18:40:05.997171   15764 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 18:40:05.997250   15764 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 18:40:05.997350   15764 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 18:40:05.997394   15764 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 18:40:05.997433   15764 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 18:40:05.997438   15764 kubeadm.go:310] 
	I0818 18:40:05.997493   15764 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 18:40:05.997503   15764 kubeadm.go:310] 
	I0818 18:40:05.997571   15764 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 18:40:05.997577   15764 kubeadm.go:310] 
	I0818 18:40:05.997602   15764 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 18:40:05.997668   15764 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 18:40:05.997724   15764 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 18:40:05.997734   15764 kubeadm.go:310] 
	I0818 18:40:05.997783   15764 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 18:40:05.997796   15764 kubeadm.go:310] 
	I0818 18:40:05.997856   15764 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 18:40:05.997863   15764 kubeadm.go:310] 
	I0818 18:40:05.997937   15764 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 18:40:05.998009   15764 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 18:40:05.998082   15764 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 18:40:05.998094   15764 kubeadm.go:310] 
	I0818 18:40:05.998216   15764 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 18:40:05.998316   15764 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 18:40:05.998323   15764 kubeadm.go:310] 
	I0818 18:40:05.998390   15764 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4b2dyc.0shil2r35fbxvtub \
	I0818 18:40:05.998479   15764 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 18:40:05.998510   15764 kubeadm.go:310] 	--control-plane 
	I0818 18:40:05.998521   15764 kubeadm.go:310] 
	I0818 18:40:05.998589   15764 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 18:40:05.998598   15764 kubeadm.go:310] 
	I0818 18:40:05.998663   15764 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4b2dyc.0shil2r35fbxvtub \
	I0818 18:40:05.998762   15764 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
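The join command printed above embeds a bootstrap token and the SHA-256 hash of the cluster CA certificate. If the token expires or the hash needs to be re-derived, both can be recovered on the control plane; a sketch assuming the stock CA path /etc/kubernetes/pki/ca.crt (minikube keeps its certs under /var/lib/minikube/certs, so adjust the path for this profile):

    # list existing bootstrap tokens, or mint a fresh one with a ready-made join line
    sudo kubeadm token list
    sudo kubeadm token create --print-join-command
    # recompute the discovery-token-ca-cert-hash from the CA certificate
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'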
	I0818 18:40:05.998772   15764 cni.go:84] Creating CNI manager for ""
	I0818 18:40:05.998779   15764 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 18:40:06.000289   15764 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 18:40:06.001819   15764 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 18:40:06.014544   15764 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
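The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. A minimal sketch of what such a conflist can look like; the subnet and exact fields here are illustrative, not the bytes minikube actually writes:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF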
	I0818 18:40:06.033510   15764 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 18:40:06.033593   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:06.033625   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-483094 minikube.k8s.io/updated_at=2024_08_18T18_40_06_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=addons-483094 minikube.k8s.io/primary=true
	I0818 18:40:06.059861   15764 ops.go:34] apiserver oom_adj: -16
	I0818 18:40:06.198552   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:06.699251   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:07.198863   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:07.698790   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:08.198970   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:08.699221   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:09.199503   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:09.699325   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:10.199602   15764 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:40:10.289666   15764 kubeadm.go:1113] duration metric: took 4.256135664s to wait for elevateKubeSystemPrivileges
	I0818 18:40:10.289695   15764 kubeadm.go:394] duration metric: took 14.4246545s to StartCluster
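The repeated "kubectl get sa default" runs above are a readiness poll: kubeadm has finished, but the controller manager needs a few seconds to create the default service account before kube-system privileges can be elevated. A bash sketch of the same wait pattern, with the binary and kubeconfig paths copied from the log:

    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly 500ms between attempts
    done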
	I0818 18:40:10.289717   15764 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:40:10.289833   15764 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:40:10.290293   15764 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:40:10.290489   15764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0818 18:40:10.290514   15764 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.116 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:40:10.290566   15764 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
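The toEnable map above is the resolved addon set for this profile; the same switches are exposed through the minikube CLI. A couple of illustrative invocations against this profile (standard minikube addon commands, not taken from this run):

    minikube -p addons-483094 addons list
    minikube -p addons-483094 addons enable metrics-server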
	I0818 18:40:10.290667   15764 addons.go:69] Setting yakd=true in profile "addons-483094"
	I0818 18:40:10.290673   15764 addons.go:69] Setting ingress-dns=true in profile "addons-483094"
	I0818 18:40:10.290684   15764 addons.go:69] Setting default-storageclass=true in profile "addons-483094"
	I0818 18:40:10.290705   15764 addons.go:69] Setting storage-provisioner=true in profile "addons-483094"
	I0818 18:40:10.290708   15764 addons.go:69] Setting gcp-auth=true in profile "addons-483094"
	I0818 18:40:10.290723   15764 addons.go:234] Setting addon storage-provisioner=true in "addons-483094"
	I0818 18:40:10.290708   15764 addons.go:69] Setting registry=true in profile "addons-483094"
	I0818 18:40:10.290696   15764 addons.go:234] Setting addon yakd=true in "addons-483094"
	I0818 18:40:10.290751   15764 addons.go:69] Setting volumesnapshots=true in profile "addons-483094"
	I0818 18:40:10.290756   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.290760   15764 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-483094"
	I0818 18:40:10.290763   15764 addons.go:234] Setting addon registry=true in "addons-483094"
	I0818 18:40:10.290769   15764 addons.go:234] Setting addon volumesnapshots=true in "addons-483094"
	I0818 18:40:10.290779   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.290779   15764 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-483094"
	I0818 18:40:10.290794   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.290796   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.290724   15764 mustload.go:65] Loading cluster: addons-483094
	I0818 18:40:10.291029   15764 config.go:182] Loaded profile config "addons-483094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:40:10.290699   15764 addons.go:234] Setting addon ingress-dns=true in "addons-483094"
	I0818 18:40:10.291194   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291204   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291217   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291224   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.290750   15764 addons.go:69] Setting volcano=true in profile "addons-483094"
	I0818 18:40:10.291246   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291251   15764 addons.go:234] Setting addon volcano=true in "addons-483094"
	I0818 18:40:10.291274   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291325   15764 addons.go:69] Setting cloud-spanner=true in profile "addons-483094"
	I0818 18:40:10.291345   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291345   15764 addons.go:234] Setting addon cloud-spanner=true in "addons-483094"
	I0818 18:40:10.291369   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291402   15764 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-483094"
	I0818 18:40:10.291450   15764 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-483094"
	I0818 18:40:10.291475   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291544   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291578   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291609   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291650   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291665   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291709   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291732   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291736   15764 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-483094"
	I0818 18:40:10.291760   15764 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-483094"
	I0818 18:40:10.290737   15764 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-483094"
	I0818 18:40:10.290737   15764 config.go:182] Loaded profile config "addons-483094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:40:10.290739   15764 addons.go:69] Setting ingress=true in profile "addons-483094"
	I0818 18:40:10.291787   15764 addons.go:69] Setting inspektor-gadget=true in profile "addons-483094"
	I0818 18:40:10.291795   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291803   15764 addons.go:234] Setting addon inspektor-gadget=true in "addons-483094"
	I0818 18:40:10.291814   15764 addons.go:69] Setting metrics-server=true in profile "addons-483094"
	I0818 18:40:10.291824   15764 addons.go:234] Setting addon ingress=true in "addons-483094"
	I0818 18:40:10.291838   15764 addons.go:234] Setting addon metrics-server=true in "addons-483094"
	I0818 18:40:10.291851   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.291859   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.292140   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292163   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.292163   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292151   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.292193   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292169   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.290731   15764 addons.go:69] Setting helm-tiller=true in profile "addons-483094"
	I0818 18:40:10.292347   15764 addons.go:234] Setting addon helm-tiller=true in "addons-483094"
	I0818 18:40:10.291639   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.291370   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292218   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292494   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292517   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.291815   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292524   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292593   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292751   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.292774   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292947   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.293304   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.293333   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.292500   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.293543   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.299622   15764 out.go:177] * Verifying Kubernetes components...
	I0818 18:40:10.292559   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.301401   15764 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:40:10.312344   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40179
	I0818 18:40:10.312528   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I0818 18:40:10.312566   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I0818 18:40:10.312977   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.313076   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.313136   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.313514   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.313532   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.313646   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.313664   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.313747   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.313760   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.313854   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0818 18:40:10.313945   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.314010   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.314556   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.314586   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.314629   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.327855   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.327906   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.328017   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.328049   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.328088   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0818 18:40:10.328254   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I0818 18:40:10.328339   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.328360   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40231
	I0818 18:40:10.328487   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.328537   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0818 18:40:10.329089   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.329106   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.329184   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.329337   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.329348   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.329402   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.329458   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.331254   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.331422   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.331445   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.331573   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.331586   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.331637   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0818 18:40:10.332041   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.332069   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.332043   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.332270   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.333011   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.333063   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.333297   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.333462   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.333475   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.334160   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.334540   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.334582   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.334818   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.335151   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.335185   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.335389   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.335439   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.343513   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.346097   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.346122   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.346681   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.347282   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.347323   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.347826   15764 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-483094"
	I0818 18:40:10.347869   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.348212   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.348244   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.368891   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46787
	I0818 18:40:10.369933   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0818 18:40:10.370417   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.370775   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0818 18:40:10.371036   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.371061   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.371084   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41163
	I0818 18:40:10.371146   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.371483   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.371549   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.371574   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.371557   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.371621   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.371667   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.372008   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.372610   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.372647   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.373025   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.373042   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.373308   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.373779   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.373821   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.374014   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.374820   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.374844   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.375232   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.375835   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.376870   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0818 18:40:10.377603   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35397
	I0818 18:40:10.378068   15764 out.go:177]   - Using image docker.io/registry:2.8.3
	I0818 18:40:10.378122   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.378070   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.378835   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45299
	I0818 18:40:10.378918   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.378940   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.379243   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.379288   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.379639   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.380091   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.380149   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.380685   15764 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 18:40:10.380704   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.380717   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.381042   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.381569   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.381607   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.382347   15764 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0818 18:40:10.382469   15764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:40:10.382492   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 18:40:10.382511   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.382668   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.382688   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.383715   15764 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0818 18:40:10.383737   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0818 18:40:10.383753   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.383771   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.387525   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.389154   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.389205   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.391051   15764 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0818 18:40:10.392942   15764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0818 18:40:10.392964   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0818 18:40:10.392983   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.393074   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.393097   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.393129   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.393146   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.393175   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.393203   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.393224   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.393367   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.393483   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.393705   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.393808   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.394131   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0818 18:40:10.394270   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.394544   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.394796   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35139
	I0818 18:40:10.394820   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0818 18:40:10.395536   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.395615   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.403567   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.403653   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.403736   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.403757   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.403774   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.403888   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.403900   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.404030   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.404042   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.404185   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.404196   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.404250   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38095
	I0818 18:40:10.404406   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.404406   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.404653   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.404651   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.404710   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.404708   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.404726   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.404922   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.405054   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.405066   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.405310   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.405348   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.405679   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.405964   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.406199   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.407820   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.407820   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.408331   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43599
	I0818 18:40:10.408712   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37363
	I0818 18:40:10.409009   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.409093   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.410167   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.410187   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.410315   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.410327   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.410350   15764 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0818 18:40:10.410536   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.410730   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.411869   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0818 18:40:10.411884   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38247
	I0818 18:40:10.411887   15764 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0818 18:40:10.411895   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0818 18:40:10.411908   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.412713   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.412939   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:10.412961   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:10.414901   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:10.414943   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:10.414957   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:10.414966   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:10.414977   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:10.415293   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38851
	I0818 18:40:10.415372   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0818 18:40:10.415631   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.415415   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:10.415439   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:10.415671   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	W0818 18:40:10.415740   15764 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
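The volcano failure above is a runtime-compatibility refusal rather than a crash: the addon is requested in the toEnable map but rejected under crio. If the warning is just noise for a crio profile, the addon can be left off explicitly; a one-line sketch using the standard minikube CLI (not from this run):

    minikube -p addons-483094 addons disable volcano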
	I0818 18:40:10.416037   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.416527   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.416553   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.417860   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0818 18:40:10.418141   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.418593   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.418768   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0818 18:40:10.418803   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.418823   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.419390   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.419407   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.419504   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.419532   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.419542   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.419685   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.419715   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.420240   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.420261   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.420307   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.420361   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.420607   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.420650   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.420744   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.421218   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.421458   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.421895   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.422367   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0818 18:40:10.423186   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.424016   15764 addons.go:234] Setting addon default-storageclass=true in "addons-483094"
	I0818 18:40:10.424057   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:10.424447   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.424481   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.424703   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0818 18:40:10.424735   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0818 18:40:10.424753   15764 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0818 18:40:10.424794   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.425501   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.425992   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34463
	I0818 18:40:10.426319   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0818 18:40:10.426479   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.426677   15764 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 18:40:10.426693   15764 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 18:40:10.426727   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.427602   15764 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0818 18:40:10.427801   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.428048   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.427859   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.428101   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.428105   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38045
	I0818 18:40:10.428484   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.428527   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0818 18:40:10.428658   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.428706   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.429146   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.429171   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.429193   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.429263   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.429409   15764 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0818 18:40:10.429423   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0818 18:40:10.429439   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.429804   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.429806   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.430011   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.430098   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.430123   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.430434   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.430981   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.431016   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.431835   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0818 18:40:10.432169   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0818 18:40:10.432299   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.432857   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.433168   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.433323   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.433349   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.434638   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.434758   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.434817   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0818 18:40:10.434884   15764 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0818 18:40:10.436898   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0818 18:40:10.436940   15764 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0818 18:40:10.436959   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.437006   15764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0818 18:40:10.437679   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0818 18:40:10.437695   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0818 18:40:10.437709   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.437765   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.437791   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.437821   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.437875   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.437895   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.437912   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.437939   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.437978   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.439861   15764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:40:10.440458   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.440582   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.441034   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.441277   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.441479   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.441482   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.441502   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.441672   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.441809   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.441872   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.442020   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.442165   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.442206   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42765
	I0818 18:40:10.442297   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.442613   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.443114   15764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:40:10.443285   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.443304   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.444680   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.444685   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.444688   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.445153   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.445191   15764 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0818 18:40:10.445208   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0818 18:40:10.445226   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.445229   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.445244   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.445193   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.445416   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.446755   15764 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0818 18:40:10.446824   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.447482   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.447748   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.447863   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.448019   15764 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0818 18:40:10.448034   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0818 18:40:10.448049   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.449539   15764 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0818 18:40:10.449543   15764 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0818 18:40:10.450681   15764 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0818 18:40:10.450696   15764 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0818 18:40:10.450712   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.450771   15764 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0818 18:40:10.450783   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0818 18:40:10.450799   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.450876   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.451083   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0818 18:40:10.451501   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.451979   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.451999   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.452066   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.452081   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.452294   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.452865   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.452931   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.453133   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.453559   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:10.453701   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:10.453703   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.454697   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455193   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455339   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455525   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.455548   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455688   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.455767   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.455789   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.455835   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.455948   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.456047   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.456091   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.456336   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.456364   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.456409   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.456506   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.456550   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.456617   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.456654   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.456691   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.456897   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	W0818 18:40:10.458589   15764 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36068->192.168.39.116:22: read: connection reset by peer
	I0818 18:40:10.458616   15764 retry.go:31] will retry after 278.288858ms: ssh: handshake failed: read tcp 192.168.39.1:36068->192.168.39.116:22: read: connection reset by peer
	I0818 18:40:10.458664   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0818 18:40:10.458985   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.459407   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.459425   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.459721   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.459899   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.461303   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.463322   15764 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0818 18:40:10.464728   15764 out.go:177]   - Using image docker.io/busybox:stable
	I0818 18:40:10.466086   15764 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0818 18:40:10.466106   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0818 18:40:10.466127   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.469146   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.469536   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.469559   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.469751   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.469955   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.470125   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.470254   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:10.473405   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0818 18:40:10.473762   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:10.474174   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:10.474193   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:10.474497   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:10.474671   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:10.475964   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:10.476203   15764 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 18:40:10.476225   15764 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 18:40:10.476240   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:10.478480   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.478782   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:10.478807   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:10.478964   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:10.479131   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:10.479269   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:10.479407   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
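Editor's note: the lines above show libmachine resolving the VM's SSH endpoint (192.168.39.116:22, the per-machine id_rsa key, user docker) once per concurrent addon installer before each manifest is copied and applied. A rough manual equivalent of one such copy-and-apply step, reusing the host, key path, and kubectl path from this log (illustrative only, not the code path minikube itself takes):

	# copy one addon manifest into the guest, then apply it with the bundled kubectl
	scp -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa \
	    ingress-deploy.yaml docker@192.168.39.116:/tmp/ingress-deploy.yaml
	ssh -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa docker@192.168.39.116 \
	    "sudo cp /tmp/ingress-deploy.yaml /etc/kubernetes/addons/ && \
	     sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	     /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml"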
	I0818 18:40:10.644059   15764 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0818 18:40:10.644061   15764 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:40:10.876778   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:40:10.877783   15764 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0818 18:40:10.877805   15764 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0818 18:40:10.895768   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0818 18:40:10.895795   15764 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0818 18:40:10.940825   15764 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0818 18:40:10.940847   15764 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0818 18:40:10.945174   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0818 18:40:10.966788   15764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0818 18:40:10.966811   15764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0818 18:40:10.970421   15764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0818 18:40:10.970445   15764 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0818 18:40:11.013881   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0818 18:40:11.036409   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0818 18:40:11.038087   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0818 18:40:11.041420   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0818 18:40:11.041440   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0818 18:40:11.052462   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 18:40:11.058338   15764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 18:40:11.058378   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0818 18:40:11.060829   15764 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0818 18:40:11.060848   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0818 18:40:11.129050   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0818 18:40:11.144708   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0818 18:40:11.144735   15764 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0818 18:40:11.186428   15764 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0818 18:40:11.186452   15764 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0818 18:40:11.188625   15764 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0818 18:40:11.188645   15764 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0818 18:40:11.227222   15764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0818 18:40:11.227248   15764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0818 18:40:11.292815   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0818 18:40:11.293708   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0818 18:40:11.293728   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0818 18:40:11.295523   15764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 18:40:11.295541   15764 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 18:40:11.446757   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0818 18:40:11.446781   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0818 18:40:11.467007   15764 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 18:40:11.467040   15764 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 18:40:11.494022   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0818 18:40:11.494044   15764 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0818 18:40:11.499605   15764 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0818 18:40:11.499625   15764 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0818 18:40:11.519758   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0818 18:40:11.534938   15764 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0818 18:40:11.534965   15764 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0818 18:40:11.595607   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0818 18:40:11.595630   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0818 18:40:11.640947   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 18:40:11.714264   15764 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0818 18:40:11.714286   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0818 18:40:11.724787   15764 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0818 18:40:11.724810   15764 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0818 18:40:11.825625   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0818 18:40:11.825645   15764 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0818 18:40:11.898581   15764 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0818 18:40:11.898604   15764 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0818 18:40:11.971698   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0818 18:40:11.975486   15764 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0818 18:40:11.975511   15764 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0818 18:40:12.135431   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0818 18:40:12.135456   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0818 18:40:12.271085   15764 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0818 18:40:12.271123   15764 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0818 18:40:12.275267   15764 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:40:12.275287   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0818 18:40:12.571351   15764 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.927247136s)
	I0818 18:40:12.571406   15764 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0818 18:40:12.571409   15764 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.927281169s)
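Editor's note: the sed pipeline that just completed rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway 192.168.39.1. A quick way to confirm the injected stanza, assuming the standard CoreDNS ConfigMap layout (illustrative):

	# Print the patched Corefile; the pipeline above should have inserted:
	#   hosts {
	#      192.168.39.1 host.minikube.internal
	#      fallthrough
	#   }
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'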
	I0818 18:40:12.595431   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0818 18:40:12.595458   15764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0818 18:40:12.596327   15764 node_ready.go:35] waiting up to 6m0s for node "addons-483094" to be "Ready" ...
	I0818 18:40:12.605596   15764 node_ready.go:49] node "addons-483094" has status "Ready":"True"
	I0818 18:40:12.605621   15764 node_ready.go:38] duration metric: took 9.271083ms for node "addons-483094" to be "Ready" ...
	I0818 18:40:12.605633   15764 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:40:12.629445   15764 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-qghrl" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:12.645263   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:40:12.673411   15764 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0818 18:40:12.673435   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0818 18:40:12.971810   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0818 18:40:12.971832   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0818 18:40:12.993339   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0818 18:40:13.099017   15764 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-483094" context rescaled to 1 replicas
	I0818 18:40:13.187963   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0818 18:40:13.187984   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0818 18:40:13.375718   15764 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0818 18:40:13.375746   15764 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0818 18:40:13.730558   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0818 18:40:14.728414   15764 pod_ready.go:103] pod "coredns-6f6b679f8f-qghrl" in "kube-system" namespace has status "Ready":"False"
	I0818 18:40:15.980724   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.103913036s)
	I0818 18:40:15.980795   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:15.980802   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.035583736s)
	I0818 18:40:15.980812   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:15.980837   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:15.980853   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:15.981123   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:15.981165   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:15.981177   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:15.981186   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:15.981195   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:15.981208   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:15.981219   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:15.981234   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:15.981241   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:15.981398   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:15.981416   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:15.981419   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:15.981453   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:15.981536   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:15.981552   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.231622   15764 pod_ready.go:93] pod "coredns-6f6b679f8f-qghrl" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:16.231659   15764 pod_ready.go:82] duration metric: took 3.602178342s for pod "coredns-6f6b679f8f-qghrl" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:16.231674   15764 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:16.321763   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.307843543s)
	I0818 18:40:16.321812   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.321823   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.321838   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.28539612s)
	I0818 18:40:16.321875   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.321891   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.322135   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.322191   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.322208   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:16.322218   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.322233   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.322243   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.322252   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.322211   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.322294   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.322189   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:16.322532   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.322570   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.322708   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:16.322707   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.322722   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.459344   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:16.459394   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:16.459777   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:16.459797   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:16.459835   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:17.415012   15764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0818 18:40:17.415055   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:17.417839   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:17.418310   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:17.418341   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:17.418496   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:17.418730   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:17.418898   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:17.419054   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
	I0818 18:40:17.915192   15764 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0818 18:40:17.947479   15764 addons.go:234] Setting addon gcp-auth=true in "addons-483094"
	I0818 18:40:17.947536   15764 host.go:66] Checking if "addons-483094" exists ...
	I0818 18:40:17.947822   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:17.947847   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:17.962205   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46323
	I0818 18:40:17.962649   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:17.963126   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:17.963150   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:17.963420   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:17.963856   15764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:40:17.963881   15764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:40:17.978284   15764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I0818 18:40:17.978639   15764 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:40:17.979086   15764 main.go:141] libmachine: Using API Version  1
	I0818 18:40:17.979104   15764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:40:17.979416   15764 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:40:17.979597   15764 main.go:141] libmachine: (addons-483094) Calling .GetState
	I0818 18:40:17.981064   15764 main.go:141] libmachine: (addons-483094) Calling .DriverName
	I0818 18:40:17.981409   15764 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0818 18:40:17.981432   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHHostname
	I0818 18:40:17.983854   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:17.984262   15764 main.go:141] libmachine: (addons-483094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:86:29", ip: ""} in network mk-addons-483094: {Iface:virbr1 ExpiryTime:2024-08-18 19:39:38 +0000 UTC Type:0 Mac:52:54:00:cd:86:29 Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:addons-483094 Clientid:01:52:54:00:cd:86:29}
	I0818 18:40:17.984288   15764 main.go:141] libmachine: (addons-483094) DBG | domain addons-483094 has defined IP address 192.168.39.116 and MAC address 52:54:00:cd:86:29 in network mk-addons-483094
	I0818 18:40:17.984444   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHPort
	I0818 18:40:17.984586   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHKeyPath
	I0818 18:40:17.984713   15764 main.go:141] libmachine: (addons-483094) Calling .GetSSHUsername
	I0818 18:40:17.984814   15764 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/addons-483094/id_rsa Username:docker}
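Editor's note: here minikube has found Google credentials on the build host, copied google_application_credentials.json and google_cloud_project into the guest, and turned the gcp-auth addon on for this profile. Enabling the same addon by hand usually looks like the sketch below, assuming application-default credentials exist on the host (illustrative, not taken from this run):

	# gcp-auth picks up the host's application-default credentials
	gcloud auth application-default login
	minikube -p addons-483094 addons enable gcp-auth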
	I0818 18:40:18.356241   15764 pod_ready.go:103] pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace has status "Ready":"False"
	I0818 18:40:18.451253   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.413131345s)
	I0818 18:40:18.451304   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451308   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.39881219s)
	I0818 18:40:18.451327   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.322249422s)
	I0818 18:40:18.451340   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451316   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451355   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.158514344s)
	I0818 18:40:18.451360   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451371   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451398   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451347   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451432   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451476   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.931691057s)
	I0818 18:40:18.451497   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451505   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451560   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.810575347s)
	I0818 18:40:18.451574   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.479845446s)
	I0818 18:40:18.451578   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451589   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451589   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451607   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451667   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.451698   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451705   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451713   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451720   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451860   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.451861   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451871   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.451874   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451884   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451885   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451892   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451893   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451894   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451901   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451904   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451909   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451911   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451917   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.451956   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.451973   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451976   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.451982   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451984   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.451991   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.451998   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.452010   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.452019   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.452385   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.452399   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.453688   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.453720   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.453728   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.453737   15764 addons.go:475] Verifying addon registry=true in "addons-483094"
	I0818 18:40:18.453851   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.453867   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454087   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.454109   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.454115   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454123   15764 addons.go:475] Verifying addon ingress=true in "addons-483094"
	I0818 18:40:18.454241   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.454252   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454281   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.454290   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454299   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.454306   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.454467   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.454494   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.454502   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.454620   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.454642   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.456053   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.456063   15764 addons.go:475] Verifying addon metrics-server=true in "addons-483094"
	I0818 18:40:18.456513   15764 out.go:177] * Verifying registry addon...
	I0818 18:40:18.457420   15764 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-483094 service yakd-dashboard -n yakd-dashboard
	
	I0818 18:40:18.457427   15764 out.go:177] * Verifying ingress addon...
	I0818 18:40:18.459003   15764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0818 18:40:18.459811   15764 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0818 18:40:18.491832   15764 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0818 18:40:18.491859   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:18.491986   15764 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0818 18:40:18.491998   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
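Editor's note: the kapi.go lines above poll pods by label selector until every matching pod reports Ready. Roughly the same check expressed with kubectl wait (illustrative; timeouts chosen to mirror the test's 6m budget):

	kubectl -n kube-system wait --for=condition=Ready pod \
	    -l kubernetes.io/minikube-addons=registry --timeout=6m
	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	    -l app.kubernetes.io/name=ingress-nginx --timeout=6m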
	I0818 18:40:18.583725   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.583745   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.584046   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.584089   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.584106   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.938230   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.292925377s)
	W0818 18:40:18.938281   15764 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0818 18:40:18.938304   15764 retry.go:31] will retry after 190.367858ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
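Editor's note: this failure is an ordering race, not a broken manifest: the VolumeSnapshotClass object was submitted in the same kubectl apply as the CRDs that define it, so snapshot.storage.k8s.io/v1 was not yet registered when the mapping was looked up. minikube copes by retrying (the 18:40:19 retry below switches to kubectl apply --force). When applying these manifests by hand, the race can be avoided by registering the CRDs first and waiting for them to be established (illustrative):

	# register the CRDs, wait for them to be established, then apply the custom resources
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml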
	I0818 18:40:18.938359   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.944977912s)
	I0818 18:40:18.938420   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.938438   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.938745   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.938765   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.938780   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:18.938792   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:18.939003   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:18.939035   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:18.939042   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:18.971240   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:18.972284   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:19.129633   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:40:19.467973   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:19.471545   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:19.971663   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:19.972230   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:20.488214   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:20.488264   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:20.567360   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.836755613s)
	I0818 18:40:20.567431   15764 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.585997647s)
	I0818 18:40:20.567436   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:20.567457   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:20.567703   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:20.567714   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:20.567723   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:20.567736   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:20.567739   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:20.567963   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:20.568005   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:20.568036   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:20.568052   15764 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-483094"
	I0818 18:40:20.569126   15764 out.go:177] * Verifying csi-hostpath-driver addon...
	I0818 18:40:20.569140   15764 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:40:20.570838   15764 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0818 18:40:20.571469   15764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0818 18:40:20.572007   15764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0818 18:40:20.572021   15764 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0818 18:40:20.582424   15764 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0818 18:40:20.582448   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:20.719926   15764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0818 18:40:20.719951   15764 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0818 18:40:20.763552   15764 pod_ready.go:103] pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace has status "Ready":"False"
	I0818 18:40:20.841723   15764 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0818 18:40:20.841745   15764 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0818 18:40:20.898518   15764 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0818 18:40:20.964772   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:20.965067   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:21.076292   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:21.464577   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:21.465263   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:21.508647   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.378949296s)
	I0818 18:40:21.508713   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:21.508729   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:21.508980   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:21.509000   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:21.509015   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:21.509018   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:21.509023   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:21.509461   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:21.509548   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:21.509566   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:21.576202   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:21.967825   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:21.968093   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:22.086526   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:22.157942   15764 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.259388165s)
	I0818 18:40:22.157999   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:22.158015   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:22.158332   15764 main.go:141] libmachine: (addons-483094) DBG | Closing plugin on server side
	I0818 18:40:22.158380   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:22.158402   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:22.158417   15764 main.go:141] libmachine: Making call to close driver server
	I0818 18:40:22.158428   15764 main.go:141] libmachine: (addons-483094) Calling .Close
	I0818 18:40:22.158664   15764 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:40:22.158683   15764 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:40:22.160545   15764 addons.go:475] Verifying addon gcp-auth=true in "addons-483094"
	I0818 18:40:22.162315   15764 out.go:177] * Verifying gcp-auth addon...
	I0818 18:40:22.164165   15764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0818 18:40:22.177781   15764 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0818 18:40:22.177799   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:22.463126   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:22.464104   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:22.580921   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:22.668589   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:22.738488   15764 pod_ready.go:98] pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:22 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.116 HostIPs:[{IP:192.168.39.116}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-18 18:40:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-18 18:40:14 +0000 UTC,FinishedAt:2024-08-18 18:40:20 +0000 UTC,ContainerID:cri-o://288682b53c85897087ce2642c592e74483dc65dca3277f09f9e8d60feb273398,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://288682b53c85897087ce2642c592e74483dc65dca3277f09f9e8d60feb273398 Started:0xc0014e53d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021604b0} {Name:kube-api-access-rt4mb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021604f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0818 18:40:22.738522   15764 pod_ready.go:82] duration metric: took 6.5068393s for pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace to be "Ready" ...
	E0818 18:40:22.738535   15764 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-t6zm6" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:22 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-18 18:40:10 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.116 HostIPs:[{IP:192.168.39.116}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-18 18:40:10 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-18 18:40:14 +0000 UTC,FinishedAt:2024-08-18 18:40:20 +0000 UTC,ContainerID:cri-o://288682b53c85897087ce2642c592e74483dc65dca3277f09f9e8d60feb273398,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://288682b53c85897087ce2642c592e74483dc65dca3277f09f9e8d60feb273398 Started:0xc0014e53d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0021604b0} {Name:kube-api-access-rt4mb MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0021604f0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0818 18:40:22.738546   15764 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.742875   15764 pod_ready.go:93] pod "etcd-addons-483094" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:22.742892   15764 pod_ready.go:82] duration metric: took 4.338015ms for pod "etcd-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.742903   15764 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.747473   15764 pod_ready.go:93] pod "kube-apiserver-addons-483094" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:22.747489   15764 pod_ready.go:82] duration metric: took 4.57942ms for pod "kube-apiserver-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.747501   15764 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.751151   15764 pod_ready.go:93] pod "kube-controller-manager-addons-483094" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:22.751167   15764 pod_ready.go:82] duration metric: took 3.658541ms for pod "kube-controller-manager-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.751177   15764 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-79skb" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.755091   15764 pod_ready.go:93] pod "kube-proxy-79skb" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:22.755107   15764 pod_ready.go:82] duration metric: took 3.923477ms for pod "kube-proxy-79skb" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.755117   15764 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:22.964096   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:22.964704   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:23.076867   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:23.135964   15764 pod_ready.go:93] pod "kube-scheduler-addons-483094" in "kube-system" namespace has status "Ready":"True"
	I0818 18:40:23.135990   15764 pod_ready.go:82] duration metric: took 380.864569ms for pod "kube-scheduler-addons-483094" in "kube-system" namespace to be "Ready" ...
	I0818 18:40:23.136000   15764 pod_ready.go:39] duration metric: took 10.530353573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:40:23.136018   15764 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:40:23.136075   15764 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:40:23.167280   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:23.187778   15764 api_server.go:72] duration metric: took 12.897229525s to wait for apiserver process to appear ...
	I0818 18:40:23.187802   15764 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:40:23.187824   15764 api_server.go:253] Checking apiserver healthz at https://192.168.39.116:8443/healthz ...
	I0818 18:40:23.192161   15764 api_server.go:279] https://192.168.39.116:8443/healthz returned 200:
	ok
	I0818 18:40:23.193119   15764 api_server.go:141] control plane version: v1.31.0
	I0818 18:40:23.193139   15764 api_server.go:131] duration metric: took 5.329578ms to wait for apiserver health ...
	I0818 18:40:23.193148   15764 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:40:23.340964   15764 system_pods.go:59] 18 kube-system pods found
	I0818 18:40:23.340998   15764 system_pods.go:61] "coredns-6f6b679f8f-qghrl" [0ad57a4a-3bea-4aae-a41d-7fbabaf0feea] Running
	I0818 18:40:23.341010   15764 system_pods.go:61] "csi-hostpath-attacher-0" [06ba1a58-4a4b-4954-9353-ec5abe630e23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0818 18:40:23.341018   15764 system_pods.go:61] "csi-hostpath-resizer-0" [432b9627-4cb3-4e74-9768-4fae94cc36dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0818 18:40:23.341029   15764 system_pods.go:61] "csi-hostpathplugin-xksf4" [2d309fa3-58bf-4a5e-8e76-38459de0b107] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0818 18:40:23.341036   15764 system_pods.go:61] "etcd-addons-483094" [245d7afe-6d36-4217-bcba-e6297ba4f1f1] Running
	I0818 18:40:23.341044   15764 system_pods.go:61] "kube-apiserver-addons-483094" [5fe8109a-a9f7-44c2-93a3-f95ca2b77e01] Running
	I0818 18:40:23.341049   15764 system_pods.go:61] "kube-controller-manager-addons-483094" [f7bf3ebf-a240-49d6-a21b-4a136a9c40ce] Running
	I0818 18:40:23.341059   15764 system_pods.go:61] "kube-ingress-dns-minikube" [7cdd6d54-a545-4f73-8e7b-95fa3aedf907] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0818 18:40:23.341065   15764 system_pods.go:61] "kube-proxy-79skb" [5e6eb18e-2c70-4df3-9ea4-f4fe95133083] Running
	I0818 18:40:23.341074   15764 system_pods.go:61] "kube-scheduler-addons-483094" [69bd15b6-8593-49b9-95b4-0db0eeb875d8] Running
	I0818 18:40:23.341083   15764 system_pods.go:61] "metrics-server-8988944d9-77bnz" [2aab5d03-7625-4a01-841b-830c70fa8ee2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 18:40:23.341092   15764 system_pods.go:61] "nvidia-device-plugin-daemonset-tvfnx" [a01a3329-cdbd-44ec-b8a3-6bc065c8505a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0818 18:40:23.341113   15764 system_pods.go:61] "registry-6fb4cdfc84-dgwqw" [067b7646-ddf6-4f0b-bc5b-f1f0f7886c10] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0818 18:40:23.341125   15764 system_pods.go:61] "registry-proxy-8h2l6" [6562d7a2-f7f9-476f-9b02-fd1cf7d752f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0818 18:40:23.341135   15764 system_pods.go:61] "snapshot-controller-56fcc65765-xhns2" [9cc6e122-f0b7-48f4-a9f4-f34bcb84c3d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:40:23.341146   15764 system_pods.go:61] "snapshot-controller-56fcc65765-xtsng" [f495d714-cc97-4377-867c-2ba6f686ce79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:40:23.341154   15764 system_pods.go:61] "storage-provisioner" [bb5b5ca7-00f4-4361-b31f-7230472ba62f] Running
	I0818 18:40:23.341166   15764 system_pods.go:61] "tiller-deploy-b48cc5f79-84wz4" [14ad1b2b-905b-495b-a83a-4e89d1a1c04f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0818 18:40:23.341174   15764 system_pods.go:74] duration metric: took 148.018507ms to wait for pod list to return data ...
	I0818 18:40:23.341185   15764 default_sa.go:34] waiting for default service account to be created ...
	I0818 18:40:23.464356   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:23.465147   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:23.535870   15764 default_sa.go:45] found service account: "default"
	I0818 18:40:23.535892   15764 default_sa.go:55] duration metric: took 194.700564ms for default service account to be created ...
	I0818 18:40:23.535901   15764 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 18:40:23.576597   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:23.667307   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:23.743940   15764 system_pods.go:86] 18 kube-system pods found
	I0818 18:40:23.743974   15764 system_pods.go:89] "coredns-6f6b679f8f-qghrl" [0ad57a4a-3bea-4aae-a41d-7fbabaf0feea] Running
	I0818 18:40:23.743985   15764 system_pods.go:89] "csi-hostpath-attacher-0" [06ba1a58-4a4b-4954-9353-ec5abe630e23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0818 18:40:23.743995   15764 system_pods.go:89] "csi-hostpath-resizer-0" [432b9627-4cb3-4e74-9768-4fae94cc36dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0818 18:40:23.744007   15764 system_pods.go:89] "csi-hostpathplugin-xksf4" [2d309fa3-58bf-4a5e-8e76-38459de0b107] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0818 18:40:23.744012   15764 system_pods.go:89] "etcd-addons-483094" [245d7afe-6d36-4217-bcba-e6297ba4f1f1] Running
	I0818 18:40:23.744020   15764 system_pods.go:89] "kube-apiserver-addons-483094" [5fe8109a-a9f7-44c2-93a3-f95ca2b77e01] Running
	I0818 18:40:23.744026   15764 system_pods.go:89] "kube-controller-manager-addons-483094" [f7bf3ebf-a240-49d6-a21b-4a136a9c40ce] Running
	I0818 18:40:23.744036   15764 system_pods.go:89] "kube-ingress-dns-minikube" [7cdd6d54-a545-4f73-8e7b-95fa3aedf907] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0818 18:40:23.744045   15764 system_pods.go:89] "kube-proxy-79skb" [5e6eb18e-2c70-4df3-9ea4-f4fe95133083] Running
	I0818 18:40:23.744051   15764 system_pods.go:89] "kube-scheduler-addons-483094" [69bd15b6-8593-49b9-95b4-0db0eeb875d8] Running
	I0818 18:40:23.744063   15764 system_pods.go:89] "metrics-server-8988944d9-77bnz" [2aab5d03-7625-4a01-841b-830c70fa8ee2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 18:40:23.744081   15764 system_pods.go:89] "nvidia-device-plugin-daemonset-tvfnx" [a01a3329-cdbd-44ec-b8a3-6bc065c8505a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0818 18:40:23.744090   15764 system_pods.go:89] "registry-6fb4cdfc84-dgwqw" [067b7646-ddf6-4f0b-bc5b-f1f0f7886c10] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0818 18:40:23.744101   15764 system_pods.go:89] "registry-proxy-8h2l6" [6562d7a2-f7f9-476f-9b02-fd1cf7d752f3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0818 18:40:23.744111   15764 system_pods.go:89] "snapshot-controller-56fcc65765-xhns2" [9cc6e122-f0b7-48f4-a9f4-f34bcb84c3d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:40:23.744121   15764 system_pods.go:89] "snapshot-controller-56fcc65765-xtsng" [f495d714-cc97-4377-867c-2ba6f686ce79] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:40:23.744127   15764 system_pods.go:89] "storage-provisioner" [bb5b5ca7-00f4-4361-b31f-7230472ba62f] Running
	I0818 18:40:23.744135   15764 system_pods.go:89] "tiller-deploy-b48cc5f79-84wz4" [14ad1b2b-905b-495b-a83a-4e89d1a1c04f] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0818 18:40:23.744145   15764 system_pods.go:126] duration metric: took 208.238415ms to wait for k8s-apps to be running ...
	I0818 18:40:23.744158   15764 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 18:40:23.744209   15764 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:40:23.786659   15764 system_svc.go:56] duration metric: took 42.495188ms WaitForService to wait for kubelet
	I0818 18:40:23.786684   15764 kubeadm.go:582] duration metric: took 13.496141334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:40:23.786705   15764 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:40:23.936424   15764 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:40:23.936451   15764 node_conditions.go:123] node cpu capacity is 2
	I0818 18:40:23.936462   15764 node_conditions.go:105] duration metric: took 149.752888ms to run NodePressure ...
	I0818 18:40:23.936473   15764 start.go:241] waiting for startup goroutines ...
	I0818 18:40:23.964225   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:23.964655   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:24.076447   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:24.167807   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:24.463528   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:24.465011   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:24.575872   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:24.668428   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:24.964745   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:24.965147   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:25.076817   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:25.168303   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:25.599091   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:25.599522   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:25.600133   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:25.667715   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:25.964471   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:25.964764   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:26.076904   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:26.169344   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:26.463032   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:26.466730   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:26.576602   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:26.667992   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:26.964352   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:26.965149   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:27.076540   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:27.167425   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:27.463913   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:27.464877   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:27.579520   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:27.668739   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:27.965073   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:27.965292   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:28.075816   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:28.167918   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:28.464957   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:28.465302   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:28.575759   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:28.669052   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:28.963129   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:28.964996   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:29.076202   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:29.167636   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:29.464566   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:29.464826   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:29.575645   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:29.668202   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:29.963018   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:29.964067   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:30.077611   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:30.168965   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:30.464019   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:30.464211   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:30.576155   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:30.667741   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:30.963870   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:30.964072   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:31.078038   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:31.168539   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:31.463040   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:31.464576   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:31.576448   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:31.668237   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:31.964810   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:31.965518   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:32.078917   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:32.168272   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:32.462740   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:32.465169   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:32.576344   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:32.667806   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:32.964706   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:32.965022   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:33.076972   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:33.167558   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:33.463667   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:33.464875   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:33.577097   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:33.667991   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:33.962774   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:33.965090   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:34.182547   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:34.183015   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:34.463007   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:34.464949   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:34.577233   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:34.668306   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:34.963342   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:34.964351   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:35.076701   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:35.167912   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:35.464839   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:35.464990   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:35.577334   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:35.669400   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:35.963562   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:35.964795   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:36.076681   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:36.167642   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:36.463339   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:36.463954   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:36.577837   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:36.670343   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:36.962728   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:36.964758   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:37.076520   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:37.168338   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:37.464259   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:37.464582   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:37.575946   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:37.668552   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:37.964407   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:37.964553   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:38.077847   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:38.167940   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:38.463793   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:38.464233   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:38.576801   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:38.676708   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:38.964384   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:38.964439   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:39.076760   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:39.168369   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:39.462589   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:39.464485   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:39.576310   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:39.667431   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:39.963850   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:39.964365   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:40.076738   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:40.168132   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:40.462937   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:40.464776   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:40.576885   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:40.668804   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:40.964573   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:40.968288   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:41.077263   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:41.167367   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:41.462837   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:41.464051   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:41.576041   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:41.667255   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:41.964755   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:41.964969   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:42.077350   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:42.167266   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:42.462687   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:42.464459   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:42.576000   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:42.667306   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:42.963550   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:42.965673   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:43.077052   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:43.167476   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:43.464661   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:43.465069   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:43.575471   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:43.668128   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:43.963983   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:43.964431   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:44.075838   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:44.168374   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:44.464560   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:44.464686   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:45.044883   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:45.046452   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:45.048755   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:45.049284   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:45.075922   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:45.168814   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:45.466127   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:45.466544   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:45.576662   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:45.667800   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:45.964241   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:45.964994   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:46.078213   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:46.168271   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:46.463353   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:46.464719   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:46.576761   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:46.668038   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:46.964520   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:46.964913   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:47.076589   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:47.167333   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:47.463444   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:47.464789   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:47.576875   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:47.667770   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:47.964208   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:47.964942   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:48.075892   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:48.168264   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:48.463702   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:48.464543   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:48.577384   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:48.670694   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:48.963791   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:48.965493   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:49.076603   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:49.168111   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:49.462739   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:49.464160   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:49.575473   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:49.667518   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:49.963203   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:49.964977   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:50.076418   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:50.168220   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:50.463884   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:50.464290   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:50.576438   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:50.667868   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:50.963695   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:50.964122   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:51.077144   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:51.167361   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:51.462929   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:51.465360   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:51.575737   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:51.668172   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:51.964318   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:51.964546   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:52.076185   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:52.167112   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:52.462303   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:52.464248   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:52.576897   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:52.668369   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:52.965063   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:52.965097   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:53.076054   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:53.167164   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:53.463124   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:53.465694   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:53.576054   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:53.667825   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:53.964406   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:53.965550   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:54.076891   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:54.168621   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:54.463043   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:40:54.464634   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:54.576989   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:54.667680   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:54.967394   15764 kapi.go:107] duration metric: took 36.508368287s to wait for kubernetes.io/minikube-addons=registry ...
	I0818 18:40:54.967405   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:55.076899   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:55.169027   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:55.464091   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:55.575573   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:55.668142   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:55.964059   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:56.076585   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:56.167897   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:56.464752   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:56.578941   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:56.668188   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:56.964425   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:57.077203   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:57.167672   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:57.464552   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:57.576374   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:57.667935   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:58.132671   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:58.132944   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:58.169688   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:58.465324   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:58.576014   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:58.667346   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:58.964307   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:59.076056   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:59.166874   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:59.464223   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:59.578004   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:59.667622   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:59.965153   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:00.076483   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:00.174660   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:00.464343   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:00.576954   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:00.669473   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:00.965106   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:01.076916   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:01.168011   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:01.464177   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:01.576870   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:01.671081   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:01.965231   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:02.076046   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:02.167944   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:02.464147   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:02.575574   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:02.668275   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:02.964137   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:03.076350   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:03.169491   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:03.464159   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:03.577095   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:03.668414   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:03.965055   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:04.075920   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:04.176112   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:04.464431   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:04.575882   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:04.668732   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:04.964525   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:05.076298   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:05.167993   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:05.465297   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:05.576991   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:05.668278   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:05.964177   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:06.076845   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:06.167744   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:06.631391   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:06.631506   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:06.728889   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:06.965483   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:07.076167   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:07.168216   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:07.464301   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:07.576931   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:07.667124   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:07.964311   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:08.076495   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:08.167830   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:08.465151   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:08.576544   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:08.667969   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:08.964872   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:09.076895   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:09.169140   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:09.464180   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:09.576311   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:09.668546   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:09.965709   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:10.076292   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:10.168497   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:10.464488   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:10.576806   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:10.668500   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:10.964828   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:11.076520   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:11.168656   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:11.464290   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:11.575885   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:11.668104   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:11.964221   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:12.075843   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:12.181821   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:12.465164   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:12.578322   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:12.874814   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:12.964414   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:13.076556   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:13.168204   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:13.463848   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:13.576307   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:13.668020   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:13.964360   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:14.076060   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:14.167456   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:14.542784   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:14.642760   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:14.743049   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:14.963823   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:15.076335   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:15.169564   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:15.465244   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:15.578267   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:15.668721   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:15.964437   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:16.076110   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:16.168160   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:16.464865   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:16.576799   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:16.668307   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:16.964462   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:17.076203   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:17.168440   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:17.778368   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:17.778992   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:17.779213   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:17.964721   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:18.076211   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:18.168401   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:18.473367   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:18.576949   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:18.667061   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:18.964795   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:19.075614   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:19.168336   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:19.464119   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:19.579495   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:19.671073   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:19.964336   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:20.083323   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:20.167496   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:20.465194   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:20.576409   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:20.668105   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:20.964383   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:21.075903   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:21.169554   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:21.464292   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:21.576811   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:21.668814   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:21.965017   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:22.077944   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:22.177503   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:22.465065   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:22.575950   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:22.668167   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:22.964502   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:23.075958   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:23.168490   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:23.465509   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:23.577896   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:23.676531   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:23.965693   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:24.076593   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:24.167985   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:24.463961   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:24.576114   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:24.667144   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:24.965316   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:25.077941   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:25.168382   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:25.467510   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:25.576457   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:25.667499   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:25.964220   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:26.077010   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:26.175890   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:26.465188   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:26.576264   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:26.667767   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:26.979587   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:27.078641   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:27.178422   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:27.464579   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:27.576507   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:27.667477   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:27.964714   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:28.075884   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:28.171105   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:28.754482   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:28.755462   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:28.755669   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:28.964181   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:29.076834   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:29.168360   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:29.464161   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:29.575320   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:29.667957   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:29.973163   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:30.075922   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:30.167736   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:30.465391   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:30.575904   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:30.667576   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:30.964412   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:31.076655   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:31.167436   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:31.464577   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:31.582858   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:31.668250   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:31.964384   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:32.076200   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:32.174015   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:32.463882   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:32.576447   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:32.667804   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:32.974613   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:33.080448   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:33.168339   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:33.470526   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:33.577293   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:33.669995   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:33.964918   15764 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:41:34.077640   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:34.167474   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:34.560488   15764 kapi.go:107] duration metric: took 1m16.100671823s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0818 18:41:34.660920   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:34.667947   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:35.076989   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:35.176071   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:35.576288   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:35.667504   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:36.076337   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:36.167491   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:36.576006   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:36.669852   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:37.158877   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:37.167693   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:37.577673   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:37.668222   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:38.076777   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:38.168095   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:38.576763   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:38.677323   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:39.081589   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:39.177113   15764 kapi.go:107] duration metric: took 1m17.012942188s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0818 18:41:39.178627   15764 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-483094 cluster.
	I0818 18:41:39.180045   15764 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0818 18:41:39.181691   15764 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0818 18:41:39.577609   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:40.076243   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:40.580091   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:41.076627   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:41.576982   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:42.076793   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:42.577049   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:43.077012   15764 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:41:43.577544   15764 kapi.go:107] duration metric: took 1m23.006072086s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0818 18:41:43.579293   15764 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, helm-tiller, nvidia-device-plugin, metrics-server, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0818 18:41:43.580573   15764 addons.go:510] duration metric: took 1m33.290007119s for enable addons: enabled=[cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher helm-tiller nvidia-device-plugin metrics-server yakd default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0818 18:41:43.580608   15764 start.go:246] waiting for cluster config update ...
	I0818 18:41:43.580624   15764 start.go:255] writing updated cluster config ...
	I0818 18:41:43.580854   15764 ssh_runner.go:195] Run: rm -f paused
	I0818 18:41:43.630959   15764 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 18:41:43.632711   15764 out.go:177] * Done! kubectl is now configured to use "addons-483094" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.104752433Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006885104724366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cc1ea86-d986-486e-87ea-66cea66cfce7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.105455968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5f2e944-ae73-405f-8bd1-b3fa9120a78b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.105516052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5f2e944-ae73-405f-8bd1-b3fa9120a78b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.105775620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c2d29ed42516dfaf00bab1a838c272670f4b671e56dc9e9323ef64f43654e11,PodSandboxId:227d7bf7cfb04a1848a26693acc7cfb1ae1d1bfd8bc273bc2059a8ca55367107,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724006716265043239,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvkpg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff73ba9d4dff434aa03df0d0bdd1b1b0de753556554e7e3335fd082e29c2229,PodSandboxId:0c7f1370497b8da151c2909ee57d9584fd228b10478cd9fabce97d63822dfbd3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724006574730375498,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90ac7d42-930b-44c7-ad80-7da227b904c7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446bc242344d894dca8c05396944e19be7a2d8f25baf1599ca4a184fa0f31a3,PodSandboxId:1596f335b37ca9d956f32ea5453458571d030b3eeea0f83d3afcb2a979492d44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724006507524870724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 416e0a4e-2e7a-40b1-9
4c0-c6346c58a7cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82,PodSandboxId:4eb31a7309e2d9e00d47f9e19e2cf9196209e2e7af5cbaa64cf0c33dd0bf8d89,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724006460612151163,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-77bnz,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 2aab5d03-7625-4a01-841b-830c70fa8ee2,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26,PodSandboxId:87d2727757b886908b73d3e1e4b5e2879f8d122a25bc0c44aa35e09926b74c3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724006417727527371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5b5ca7-00f4-4361-b31f-7230472ba62f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b,PodSandboxId:e176d7e5b3a19bd795c31c4059d175b5e4852f1268248e6e5338884f49f28183,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724006413989541129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-qghrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad57a4a-3bea-4aae-a41d-7fbabaf0feea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd,PodSandboxId:550f8b6bda1ed09890a07874a4b9eb99f2164b8fbfd9da181ecb6d73d1657b00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724006411666469717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79skb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6eb18e-2c70-4df3-9ea4-f4fe95133083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255,PodSandboxId:8a106fb4afc306d3ad2c6defe51444723f71776e181e97f57260826960ec94ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724006399842649059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a9117501a7e8b6f167fdb23ec7a923,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341,PodSandboxId:0902496d87924dd1fb68f933449dbdaa8468823e905ea861b33b7833ed1446de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724006399856649917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5358552ce31d2587d5ceaafc457b3494,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637,PodSandboxId:b7c7c626556f444cdfd36625c4520758ad1799e0bda355f0acb81aa999270181,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNIN
G,CreatedAt:1724006399855025711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb947c5b4b80f32c6c8dfdb9c646073,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326,PodSandboxId:a08e1e720570d8e0df2b88c0c3dc5f8915f6765f22d3d1d70ad01ea15359a661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17240
06399678967782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5be7b7eabb1824f09b6daa59a48bc50,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5f2e944-ae73-405f-8bd1-b3fa9120a78b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.144140803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fda9014-4116-4e41-b684-b531cfeeed81 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.144275389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fda9014-4116-4e41-b684-b531cfeeed81 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.145773213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0f92c062-5d09-43ed-88af-004c3750e7e4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.147412734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006885147381594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f92c062-5d09-43ed-88af-004c3750e7e4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.148166285Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=173d0871-569e-415a-95b4-c39cbdc7f4f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.148263701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=173d0871-569e-415a-95b4-c39cbdc7f4f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.148545674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c2d29ed42516dfaf00bab1a838c272670f4b671e56dc9e9323ef64f43654e11,PodSandboxId:227d7bf7cfb04a1848a26693acc7cfb1ae1d1bfd8bc273bc2059a8ca55367107,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724006716265043239,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvkpg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff73ba9d4dff434aa03df0d0bdd1b1b0de753556554e7e3335fd082e29c2229,PodSandboxId:0c7f1370497b8da151c2909ee57d9584fd228b10478cd9fabce97d63822dfbd3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724006574730375498,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90ac7d42-930b-44c7-ad80-7da227b904c7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446bc242344d894dca8c05396944e19be7a2d8f25baf1599ca4a184fa0f31a3,PodSandboxId:1596f335b37ca9d956f32ea5453458571d030b3eeea0f83d3afcb2a979492d44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724006507524870724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 416e0a4e-2e7a-40b1-9
4c0-c6346c58a7cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82,PodSandboxId:4eb31a7309e2d9e00d47f9e19e2cf9196209e2e7af5cbaa64cf0c33dd0bf8d89,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724006460612151163,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-77bnz,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 2aab5d03-7625-4a01-841b-830c70fa8ee2,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26,PodSandboxId:87d2727757b886908b73d3e1e4b5e2879f8d122a25bc0c44aa35e09926b74c3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724006417727527371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5b5ca7-00f4-4361-b31f-7230472ba62f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b,PodSandboxId:e176d7e5b3a19bd795c31c4059d175b5e4852f1268248e6e5338884f49f28183,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724006413989541129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-qghrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad57a4a-3bea-4aae-a41d-7fbabaf0feea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd,PodSandboxId:550f8b6bda1ed09890a07874a4b9eb99f2164b8fbfd9da181ecb6d73d1657b00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724006411666469717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79skb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6eb18e-2c70-4df3-9ea4-f4fe95133083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255,PodSandboxId:8a106fb4afc306d3ad2c6defe51444723f71776e181e97f57260826960ec94ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724006399842649059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a9117501a7e8b6f167fdb23ec7a923,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341,PodSandboxId:0902496d87924dd1fb68f933449dbdaa8468823e905ea861b33b7833ed1446de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724006399856649917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5358552ce31d2587d5ceaafc457b3494,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637,PodSandboxId:b7c7c626556f444cdfd36625c4520758ad1799e0bda355f0acb81aa999270181,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNIN
G,CreatedAt:1724006399855025711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb947c5b4b80f32c6c8dfdb9c646073,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326,PodSandboxId:a08e1e720570d8e0df2b88c0c3dc5f8915f6765f22d3d1d70ad01ea15359a661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17240
06399678967782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5be7b7eabb1824f09b6daa59a48bc50,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=173d0871-569e-415a-95b4-c39cbdc7f4f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.189544450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=371586c9-399c-4e74-b6af-be584cac9417 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.189639428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=371586c9-399c-4e74-b6af-be584cac9417 name=/runtime.v1.RuntimeService/Version
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.191081487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bfde58a7-d4cd-4f66-b73a-b32842fd5956 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.192683338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006885192655439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bfde58a7-d4cd-4f66-b73a-b32842fd5956 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.193535617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17075acf-fd9d-4dd8-a5b0-abf8bb0bf65a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.193590074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17075acf-fd9d-4dd8-a5b0-abf8bb0bf65a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.193859298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c2d29ed42516dfaf00bab1a838c272670f4b671e56dc9e9323ef64f43654e11,PodSandboxId:227d7bf7cfb04a1848a26693acc7cfb1ae1d1bfd8bc273bc2059a8ca55367107,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724006716265043239,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvkpg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff73ba9d4dff434aa03df0d0bdd1b1b0de753556554e7e3335fd082e29c2229,PodSandboxId:0c7f1370497b8da151c2909ee57d9584fd228b10478cd9fabce97d63822dfbd3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724006574730375498,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90ac7d42-930b-44c7-ad80-7da227b904c7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446bc242344d894dca8c05396944e19be7a2d8f25baf1599ca4a184fa0f31a3,PodSandboxId:1596f335b37ca9d956f32ea5453458571d030b3eeea0f83d3afcb2a979492d44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724006507524870724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 416e0a4e-2e7a-40b1-9
4c0-c6346c58a7cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82,PodSandboxId:4eb31a7309e2d9e00d47f9e19e2cf9196209e2e7af5cbaa64cf0c33dd0bf8d89,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724006460612151163,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-77bnz,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 2aab5d03-7625-4a01-841b-830c70fa8ee2,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26,PodSandboxId:87d2727757b886908b73d3e1e4b5e2879f8d122a25bc0c44aa35e09926b74c3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724006417727527371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5b5ca7-00f4-4361-b31f-7230472ba62f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b,PodSandboxId:e176d7e5b3a19bd795c31c4059d175b5e4852f1268248e6e5338884f49f28183,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724006413989541129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-qghrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad57a4a-3bea-4aae-a41d-7fbabaf0feea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd,PodSandboxId:550f8b6bda1ed09890a07874a4b9eb99f2164b8fbfd9da181ecb6d73d1657b00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724006411666469717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79skb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6eb18e-2c70-4df3-9ea4-f4fe95133083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255,PodSandboxId:8a106fb4afc306d3ad2c6defe51444723f71776e181e97f57260826960ec94ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724006399842649059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a9117501a7e8b6f167fdb23ec7a923,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341,PodSandboxId:0902496d87924dd1fb68f933449dbdaa8468823e905ea861b33b7833ed1446de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724006399856649917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5358552ce31d2587d5ceaafc457b3494,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637,PodSandboxId:b7c7c626556f444cdfd36625c4520758ad1799e0bda355f0acb81aa999270181,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNIN
G,CreatedAt:1724006399855025711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb947c5b4b80f32c6c8dfdb9c646073,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326,PodSandboxId:a08e1e720570d8e0df2b88c0c3dc5f8915f6765f22d3d1d70ad01ea15359a661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17240
06399678967782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5be7b7eabb1824f09b6daa59a48bc50,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17075acf-fd9d-4dd8-a5b0-abf8bb0bf65a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.235714052Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a9fc89c-c1e7-465a-85f1-5703d2884f1a name=/runtime.v1.RuntimeService/Version
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.235792073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a9fc89c-c1e7-465a-85f1-5703d2884f1a name=/runtime.v1.RuntimeService/Version
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.237196634Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c9516af-3fce-41a8-a5b7-ef9865d38199 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.238632589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006885238600600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c9516af-3fce-41a8-a5b7-ef9865d38199 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.239278202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b513f744-baf3-4fdf-9df7-44981bd48edb name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.239468800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b513f744-baf3-4fdf-9df7-44981bd48edb name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 18:48:05 addons-483094 crio[679]: time="2024-08-18 18:48:05.239879753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c2d29ed42516dfaf00bab1a838c272670f4b671e56dc9e9323ef64f43654e11,PodSandboxId:227d7bf7cfb04a1848a26693acc7cfb1ae1d1bfd8bc273bc2059a8ca55367107,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724006716265043239,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-lvkpg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b8ef0a2e-8ffd-41a0-8240-ba43a0cf603c,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aff73ba9d4dff434aa03df0d0bdd1b1b0de753556554e7e3335fd082e29c2229,PodSandboxId:0c7f1370497b8da151c2909ee57d9584fd228b10478cd9fabce97d63822dfbd3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724006574730375498,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 90ac7d42-930b-44c7-ad80-7da227b904c7,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a446bc242344d894dca8c05396944e19be7a2d8f25baf1599ca4a184fa0f31a3,PodSandboxId:1596f335b37ca9d956f32ea5453458571d030b3eeea0f83d3afcb2a979492d44,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724006507524870724,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 416e0a4e-2e7a-40b1-9
4c0-c6346c58a7cc,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82,PodSandboxId:4eb31a7309e2d9e00d47f9e19e2cf9196209e2e7af5cbaa64cf0c33dd0bf8d89,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1724006460612151163,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-77bnz,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 2aab5d03-7625-4a01-841b-830c70fa8ee2,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26,PodSandboxId:87d2727757b886908b73d3e1e4b5e2879f8d122a25bc0c44aa35e09926b74c3a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724006417727527371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb5b5ca7-00f4-4361-b31f-7230472ba62f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b,PodSandboxId:e176d7e5b3a19bd795c31c4059d175b5e4852f1268248e6e5338884f49f28183,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724006413989541129,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b
679f8f-qghrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ad57a4a-3bea-4aae-a41d-7fbabaf0feea,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd,PodSandboxId:550f8b6bda1ed09890a07874a4b9eb99f2164b8fbfd9da181ecb6d73d1657b00,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96
f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724006411666469717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-79skb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e6eb18e-2c70-4df3-9ea4-f4fe95133083,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255,PodSandboxId:8a106fb4afc306d3ad2c6defe51444723f71776e181e97f57260826960ec94ce,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724006399842649059,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80a9117501a7e8b6f167fdb23ec7a923,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341,PodSandboxId:0902496d87924dd1fb68f933449dbdaa8468823e905ea861b33b7833ed1446de,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724006399856649917,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5358552ce31d2587d5ceaafc457b3494,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637,PodSandboxId:b7c7c626556f444cdfd36625c4520758ad1799e0bda355f0acb81aa999270181,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNIN
G,CreatedAt:1724006399855025711,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fb947c5b4b80f32c6c8dfdb9c646073,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326,PodSandboxId:a08e1e720570d8e0df2b88c0c3dc5f8915f6765f22d3d1d70ad01ea15359a661,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:17240
06399678967782,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-483094,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5be7b7eabb1824f09b6daa59a48bc50,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b513f744-baf3-4fdf-9df7-44981bd48edb name=/runtime.v1.RuntimeService/ListContainers
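Note: the crio[679] debug entries above are the CRI gRPC round-trips (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) issued repeatedly while the failure logs were being collected; because no filter is supplied ("No filters were applied"), the same full container list is returned each time. As a point of reference only, a minimal Go sketch of the same ListContainers call against the CRI-O socket might look like the following. The socket path is taken from the node's cri-socket annotation shown further below; the use of the k8s.io/cri-api and google.golang.org/grpc modules is an assumption of this sketch, not code from the test suite.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket (path from the node annotation
	// kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty ListContainersRequest matches the unfiltered calls in the log
	// above and returns every container on the node.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.GetMetadata().GetName(), c.GetState())
	}
}

On the node itself, a condensed view equivalent to the "container status" section below can be obtained over the same socket with `sudo crictl ps -a`.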
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c2d29ed42516       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   227d7bf7cfb04       hello-world-app-55bf9c44b4-lvkpg
	aff73ba9d4dff       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         5 minutes ago       Running             nginx                     0                   0c7f1370497b8       nginx
	a446bc242344d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   1596f335b37ca       busybox
	7c224479822a8       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   4eb31a7309e2d       metrics-server-8988944d9-77bnz
	554234cfc6381       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   87d2727757b88       storage-provisioner
	068521e6cfce6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   e176d7e5b3a19       coredns-6f6b679f8f-qghrl
	2f4a9b244767a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        7 minutes ago       Running             kube-proxy                0                   550f8b6bda1ed       kube-proxy-79skb
	255d041856e84       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   0902496d87924       etcd-addons-483094
	30ace9fbe7263       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        8 minutes ago       Running             kube-apiserver            0                   b7c7c626556f4       kube-apiserver-addons-483094
	ffb1c08f476fe       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        8 minutes ago       Running             kube-controller-manager   0                   8a106fb4afc30       kube-controller-manager-addons-483094
	3e6be4f20fdef       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        8 minutes ago       Running             kube-scheduler            0                   a08e1e720570d       kube-scheduler-addons-483094
	
	
	==> coredns [068521e6cfce6791f3e91a1ed139b596c6229d26b0f9bc48cf02f20e566d959b] <==
	[INFO] 10.244.0.7:48292 - 48904 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00198094s
	[INFO] 10.244.0.7:47738 - 41205 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000063438s
	[INFO] 10.244.0.7:47738 - 27126 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090932s
	[INFO] 10.244.0.7:42692 - 14150 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061324s
	[INFO] 10.244.0.7:42692 - 11332 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045669s
	[INFO] 10.244.0.7:52227 - 55459 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062698s
	[INFO] 10.244.0.7:52227 - 18849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093155s
	[INFO] 10.244.0.7:49294 - 24259 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000072208s
	[INFO] 10.244.0.7:49294 - 34268 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000039388s
	[INFO] 10.244.0.7:39873 - 23137 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007882s
	[INFO] 10.244.0.7:39873 - 45668 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000072443s
	[INFO] 10.244.0.7:48201 - 31771 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000029199s
	[INFO] 10.244.0.7:48201 - 27929 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091981s
	[INFO] 10.244.0.7:35602 - 25551 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000041838s
	[INFO] 10.244.0.7:35602 - 18889 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037474s
	[INFO] 10.244.0.22:53928 - 18241 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000409347s
	[INFO] 10.244.0.22:35985 - 332 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000072186s
	[INFO] 10.244.0.22:46878 - 4684 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108381s
	[INFO] 10.244.0.22:40624 - 28689 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069645s
	[INFO] 10.244.0.22:52391 - 7396 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012731s
	[INFO] 10.244.0.22:53157 - 12726 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088011s
	[INFO] 10.244.0.22:34132 - 14793 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000551582s
	[INFO] 10.244.0.22:43725 - 58678 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000638649s
	[INFO] 10.244.0.26:60064 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000444713s
	[INFO] 10.244.0.26:39631 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010703s
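Note: the alternating NXDOMAIN/NOERROR pairs above are the expected effect of the pod DNS search path rather than lookup failures. With the cluster default of options ndots:5, a name such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, hence the three NXDOMAIN variants per lookup) before the fully-qualified query succeeds with NOERROR; the exact resolv.conf injected by kubelet is an assumption here, not captured in this run.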
	
	
	==> describe nodes <==
	Name:               addons-483094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-483094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=addons-483094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T18_40_06_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-483094
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:40:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-483094
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 18:48:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:45:43 +0000   Sun, 18 Aug 2024 18:40:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:45:43 +0000   Sun, 18 Aug 2024 18:40:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:45:43 +0000   Sun, 18 Aug 2024 18:40:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:45:43 +0000   Sun, 18 Aug 2024 18:40:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.116
	  Hostname:    addons-483094
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c624aa30b43d468f84f32a98bc7e0ee9
	  System UUID:                c624aa30-b43d-468f-84f3-2a98bc7e0ee9
	  Boot ID:                    7d57615f-d260-4095-8c5f-a74965ea1b0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  default                     hello-world-app-55bf9c44b4-lvkpg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 coredns-6f6b679f8f-qghrl                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m55s
	  kube-system                 etcd-addons-483094                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m
	  kube-system                 kube-apiserver-addons-483094             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-controller-manager-addons-483094    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-proxy-79skb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 kube-scheduler-addons-483094             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 metrics-server-8988944d9-77bnz           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m50s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m52s                kube-proxy       
	  Normal  NodeAllocatableEnforced  8m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m6s (x8 over 8m7s)  kubelet          Node addons-483094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m6s (x8 over 8m7s)  kubelet          Node addons-483094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m6s (x7 over 8m7s)  kubelet          Node addons-483094 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m                   kubelet          Node addons-483094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m                   kubelet          Node addons-483094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m                   kubelet          Node addons-483094 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m59s                kubelet          Node addons-483094 status is now: NodeReady
	  Normal  RegisteredNode           7m56s                node-controller  Node addons-483094 event: Registered Node addons-483094 in Controller
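Note: the request percentages in the "Allocated resources" table are computed against the allocatable capacity listed above: 850m CPU requested (100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler + 100m metrics-server) out of 2000m allocatable ≈ 42%, and 370Mi memory requested (70Mi + 100Mi + 200Mi) out of 3912780Ki ≈ 3821Mi allocatable ≈ 9%; the 170Mi (4%) memory limit comes entirely from the coredns pod.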
	
	
	==> dmesg <==
	[  +5.633595] kauditd_printk_skb: 133 callbacks suppressed
	[  +6.352680] kauditd_printk_skb: 74 callbacks suppressed
	[ +27.226948] kauditd_printk_skb: 4 callbacks suppressed
	[Aug18 18:41] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.713600] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.292546] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.044806] kauditd_printk_skb: 48 callbacks suppressed
	[  +7.934926] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.319163] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.728430] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.371445] kauditd_printk_skb: 45 callbacks suppressed
	[Aug18 18:42] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.820558] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.209896] kauditd_printk_skb: 35 callbacks suppressed
	[  +5.027760] kauditd_printk_skb: 102 callbacks suppressed
	[  +5.128333] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.040213] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.045408] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.433523] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.196847] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.407353] kauditd_printk_skb: 17 callbacks suppressed
	[Aug18 18:43] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.356618] kauditd_printk_skb: 33 callbacks suppressed
	[Aug18 18:45] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.504785] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [255d041856e8437bcf18da245f2fb56fbdd607b7d8177d5e988729e7b7b7f341] <==
	{"level":"info","ts":"2024-08-18T18:41:17.760422Z","caller":"traceutil/trace.go:171","msg":"trace[852162146] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"308.010897ms","start":"2024-08-18T18:41:17.452400Z","end":"2024-08-18T18:41:17.760411Z","steps":["trace[852162146] 'range keys from in-memory index tree'  (duration: 307.857731ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:17.760449Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T18:41:17.452368Z","time spent":"308.073687ms","remote":"127.0.0.1:37098","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-18T18:41:17.760619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.849494ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:1823"}
	{"level":"info","ts":"2024-08-18T18:41:17.760658Z","caller":"traceutil/trace.go:171","msg":"trace[403678913] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:1032; }","duration":"305.888818ms","start":"2024-08-18T18:41:17.454762Z","end":"2024-08-18T18:41:17.760651Z","steps":["trace[403678913] 'range keys from in-memory index tree'  (duration: 305.751083ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:17.760682Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T18:41:17.454733Z","time spent":"305.945202ms","remote":"127.0.0.1:37024","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":1847,"request content":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" "}
	{"level":"warn","ts":"2024-08-18T18:41:17.760924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.309915ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-admission-patch-gfzr8.17ece6c1ab10df48\" ","response":"range_response_count:1 size:783"}
	{"level":"info","ts":"2024-08-18T18:41:17.760975Z","caller":"traceutil/trace.go:171","msg":"trace[161565522] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-admission-patch-gfzr8.17ece6c1ab10df48; range_end:; response_count:1; response_revision:1032; }","duration":"241.360645ms","start":"2024-08-18T18:41:17.519605Z","end":"2024-08-18T18:41:17.760965Z","steps":["trace[161565522] 'range keys from in-memory index tree'  (duration: 241.129175ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:17.761080Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.713355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:41:17.761114Z","caller":"traceutil/trace.go:171","msg":"trace[565680782] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"196.749319ms","start":"2024-08-18T18:41:17.564360Z","end":"2024-08-18T18:41:17.761109Z","steps":["trace[565680782] 'range keys from in-memory index tree'  (duration: 196.651111ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:17.761622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.079197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:41:17.761649Z","caller":"traceutil/trace.go:171","msg":"trace[436122815] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"105.109395ms","start":"2024-08-18T18:41:17.656532Z","end":"2024-08-18T18:41:17.761641Z","steps":["trace[436122815] 'range keys from in-memory index tree'  (duration: 105.029605ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:28.736449Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.561385ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:41:28.737257Z","caller":"traceutil/trace.go:171","msg":"trace[1647100959] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1122; }","duration":"173.333674ms","start":"2024-08-18T18:41:28.563866Z","end":"2024-08-18T18:41:28.737200Z","steps":["trace[1647100959] 'range keys from in-memory index tree'  (duration: 172.517087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:28.736619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.643525ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T18:41:28.737384Z","caller":"traceutil/trace.go:171","msg":"trace[148679581] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1122; }","duration":"285.419489ms","start":"2024-08-18T18:41:28.451960Z","end":"2024-08-18T18:41:28.737379Z","steps":["trace[148679581] 'range keys from in-memory index tree'  (duration: 284.554286ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:41:28.736785Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.178223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-08-18T18:41:28.737512Z","caller":"traceutil/trace.go:171","msg":"trace[1213203713] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1122; }","duration":"164.906881ms","start":"2024-08-18T18:41:28.572600Z","end":"2024-08-18T18:41:28.737507Z","steps":["trace[1213203713] 'range keys from in-memory index tree'  (duration: 164.088648ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:41:37.144465Z","caller":"traceutil/trace.go:171","msg":"trace[356874224] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"116.877275ms","start":"2024-08-18T18:41:37.027574Z","end":"2024-08-18T18:41:37.144452Z","steps":["trace[356874224] 'process raft request'  (duration: 116.526044ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T18:42:26.947756Z","caller":"traceutil/trace.go:171","msg":"trace[126730107] transaction","detail":"{read_only:false; response_revision:1540; number_of_response:1; }","duration":"393.26166ms","start":"2024-08-18T18:42:26.554465Z","end":"2024-08-18T18:42:26.947726Z","steps":["trace[126730107] 'process raft request'  (duration: 392.977838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:42:26.947981Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T18:42:26.554447Z","time spent":"393.405303ms","remote":"127.0.0.1:37154","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-nnk7ey7brwatdk3wj4ugvhz7wi\" mod_revision:1401 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-nnk7ey7brwatdk3wj4ugvhz7wi\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-nnk7ey7brwatdk3wj4ugvhz7wi\" > >"}
	{"level":"info","ts":"2024-08-18T18:42:45.584135Z","caller":"traceutil/trace.go:171","msg":"trace[1559525544] linearizableReadLoop","detail":"{readStateIndex:1719; appliedIndex:1718; }","duration":"289.679135ms","start":"2024-08-18T18:42:45.294430Z","end":"2024-08-18T18:42:45.584109Z","steps":["trace[1559525544] 'read index received'  (duration: 289.479438ms)","trace[1559525544] 'applied index is now lower than readState.Index'  (duration: 199.202µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T18:42:45.585409Z","caller":"traceutil/trace.go:171","msg":"trace[728233129] transaction","detail":"{read_only:false; response_revision:1662; number_of_response:1; }","duration":"352.683922ms","start":"2024-08-18T18:42:45.232708Z","end":"2024-08-18T18:42:45.585392Z","steps":["trace[728233129] 'process raft request'  (duration: 351.263269ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T18:42:45.585520Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T18:42:45.232691Z","time spent":"352.771605ms","remote":"127.0.0.1:37098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":10117,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/gadget/gadget-zx5jn\" mod_revision:1659 > success:<request_put:<key:\"/registry/pods/gadget/gadget-zx5jn\" value_size:10075 >> failure:<request_range:<key:\"/registry/pods/gadget/gadget-zx5jn\" > >"}
	{"level":"warn","ts":"2024-08-18T18:42:45.586034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"291.59923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-08-18T18:42:45.586094Z","caller":"traceutil/trace.go:171","msg":"trace[1730483542] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1662; }","duration":"291.657059ms","start":"2024-08-18T18:42:45.294425Z","end":"2024-08-18T18:42:45.586082Z","steps":["trace[1730483542] 'agreement among raft nodes before linearized reading'  (duration: 291.527783ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:48:05 up 8 min,  0 users,  load average: 0.06, 0.54, 0.44
	Linux addons-483094 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [30ace9fbe7263af78a8616bf313b1c1b1331c16006584ae12b3a319e3117c637] <==
	E0818 18:42:06.555357       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.4.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.4.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.4.186:443: connect: connection refused" logger="UnhandledError"
	E0818 18:42:06.561072       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.4.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.4.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.4.186:443: connect: connection refused" logger="UnhandledError"
	I0818 18:42:06.636915       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0818 18:42:21.034507       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.190.186"}
	E0818 18:42:36.342822       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0818 18:42:44.504777       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0818 18:42:45.558951       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0818 18:42:49.983965       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0818 18:42:50.185654       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.200.243"}
	I0818 18:42:55.609512       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0818 18:43:29.144860       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.154122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0818 18:43:29.179025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.179084       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0818 18:43:29.194278       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.194333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0818 18:43:29.298255       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.298303       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0818 18:43:29.316795       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0818 18:43:29.316846       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0818 18:43:30.299293       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0818 18:43:30.318053       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0818 18:43:30.323444       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0818 18:45:13.316976       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.146.166"}
	E0818 18:45:15.888653       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [ffb1c08f476fee201be325e51d6dd357cb4ffdf3dcf12352e7e9700e74fdd255] <==
	W0818 18:45:51.115365       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:45:51.115521       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:46:08.229123       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:46:08.229369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:46:19.102402       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:46:19.102467       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:46:19.294506       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:46:19.294649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:46:23.380450       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:46:23.380571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:46:53.612928       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:46:53.613025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:47:01.921400       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:47:01.921483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:47:17.846714       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:47:17.846775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:47:18.188786       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:47:18.188843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:47:32.902021       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:47:32.902075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:47:49.861387       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:47:49.861444       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0818 18:48:00.645760       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0818 18:48:00.645827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0818 18:48:04.225946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="10.228µs"
	
	
	==> kube-proxy [2f4a9b244767a536f7abb876ae15c52fe9995eb1655f1aa1b4f3475afdbc9ffd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:40:12.452866       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:40:12.482550       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.116"]
	E0818 18:40:12.482650       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:40:12.662688       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:40:12.662722       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:40:12.662747       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:40:12.693363       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:40:12.693599       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:40:12.693609       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:40:12.697820       1 config.go:197] "Starting service config controller"
	I0818 18:40:12.697835       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:40:12.697858       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:40:12.697861       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:40:12.705509       1 config.go:326] "Starting node config controller"
	I0818 18:40:12.705520       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:40:12.798349       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 18:40:12.798421       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:40:12.805799       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3e6be4f20fdef5ef151c25af146e8eb28a5292a85b91d1e01b54a10e00f99326] <==
	E0818 18:40:02.748262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0818 18:40:02.748272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:02.747665       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 18:40:02.748377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:02.748460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 18:40:02.748486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:02.748526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 18:40:02.748565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.607413       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 18:40:03.607470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.648418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 18:40:03.648470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.675587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0818 18:40:03.675605       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 18:40:03.675858       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	E0818 18:40:03.675726       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.753930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 18:40:03.753995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.841479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 18:40:03.841544       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.917118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 18:40:03.917175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 18:40:03.970769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 18:40:03.970834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0818 18:40:06.837458       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 18:47:25 addons-483094 kubelet[1236]: E0818 18:47:25.659271    1236 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006845658836143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:47:35 addons-483094 kubelet[1236]: E0818 18:47:35.661835    1236 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006855661363429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:47:35 addons-483094 kubelet[1236]: E0818 18:47:35.661931    1236 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006855661363429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:47:45 addons-483094 kubelet[1236]: E0818 18:47:45.664869    1236 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006865664573522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:47:45 addons-483094 kubelet[1236]: E0818 18:47:45.664913    1236 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006865664573522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:47:55 addons-483094 kubelet[1236]: E0818 18:47:55.668143    1236 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006875667748987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:47:55 addons-483094 kubelet[1236]: E0818 18:47:55.668598    1236 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006875667748987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:48:04 addons-483094 kubelet[1236]: I0818 18:48:04.256992    1236 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-lvkpg" podStartSLOduration=168.737155279 podStartE2EDuration="2m51.256963971s" podCreationTimestamp="2024-08-18 18:45:13 +0000 UTC" firstStartedPulling="2024-08-18 18:45:13.732386957 +0000 UTC m=+308.536454610" lastFinishedPulling="2024-08-18 18:45:16.252195646 +0000 UTC m=+311.056263302" observedRunningTime="2024-08-18 18:45:17.185573462 +0000 UTC m=+311.989641134" watchObservedRunningTime="2024-08-18 18:48:04.256963971 +0000 UTC m=+479.061031639"
	Aug 18 18:48:05 addons-483094 kubelet[1236]: E0818 18:48:05.407317    1236 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 18:48:05 addons-483094 kubelet[1236]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 18:48:05 addons-483094 kubelet[1236]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 18:48:05 addons-483094 kubelet[1236]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 18:48:05 addons-483094 kubelet[1236]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 18:48:05 addons-483094 kubelet[1236]: E0818 18:48:05.672309    1236 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006885671877578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:48:05 addons-483094 kubelet[1236]: E0818 18:48:05.672357    1236 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724006885671877578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:593723,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.731867    1236 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nd5k\" (UniqueName: \"kubernetes.io/projected/2aab5d03-7625-4a01-841b-830c70fa8ee2-kube-api-access-9nd5k\") pod \"2aab5d03-7625-4a01-841b-830c70fa8ee2\" (UID: \"2aab5d03-7625-4a01-841b-830c70fa8ee2\") "
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.731936    1236 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2aab5d03-7625-4a01-841b-830c70fa8ee2-tmp-dir\") pod \"2aab5d03-7625-4a01-841b-830c70fa8ee2\" (UID: \"2aab5d03-7625-4a01-841b-830c70fa8ee2\") "
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.732351    1236 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2aab5d03-7625-4a01-841b-830c70fa8ee2-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "2aab5d03-7625-4a01-841b-830c70fa8ee2" (UID: "2aab5d03-7625-4a01-841b-830c70fa8ee2"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.739709    1236 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aab5d03-7625-4a01-841b-830c70fa8ee2-kube-api-access-9nd5k" (OuterVolumeSpecName: "kube-api-access-9nd5k") pod "2aab5d03-7625-4a01-841b-830c70fa8ee2" (UID: "2aab5d03-7625-4a01-841b-830c70fa8ee2"). InnerVolumeSpecName "kube-api-access-9nd5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.827393    1236 scope.go:117] "RemoveContainer" containerID="7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82"
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.832412    1236 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9nd5k\" (UniqueName: \"kubernetes.io/projected/2aab5d03-7625-4a01-841b-830c70fa8ee2-kube-api-access-9nd5k\") on node \"addons-483094\" DevicePath \"\""
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.832431    1236 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/2aab5d03-7625-4a01-841b-830c70fa8ee2-tmp-dir\") on node \"addons-483094\" DevicePath \"\""
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.872441    1236 scope.go:117] "RemoveContainer" containerID="7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82"
	Aug 18 18:48:05 addons-483094 kubelet[1236]: E0818 18:48:05.873402    1236 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82\": container with ID starting with 7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82 not found: ID does not exist" containerID="7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82"
	Aug 18 18:48:05 addons-483094 kubelet[1236]: I0818 18:48:05.873438    1236 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82"} err="failed to get container status \"7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82\": rpc error: code = NotFound desc = could not find container \"7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82\": container with ID starting with 7c224479822a8916e6cd2bff8653ba4916013c36d8cec5b3c3e5dbd7a22b3a82 not found: ID does not exist"
	
	
	==> storage-provisioner [554234cfc6381bbe54622ab9df0f65d637b60bbca63b81dae3e883c7fba3bb26] <==
	I0818 18:40:19.152194       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 18:40:19.169287       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 18:40:19.169388       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 18:40:19.207059       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 18:40:19.209496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-483094_1a00a840-75be-41ec-aa14-5fa0dc2b943c!
	I0818 18:40:19.227319       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62d92a9c-391b-4d52-87c5-30c4c554cd9b", APIVersion:"v1", ResourceVersion:"758", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-483094_1a00a840-75be-41ec-aa14-5fa0dc2b943c became leader
	I0818 18:40:19.309848       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-483094_1a00a840-75be-41ec-aa14-5fa0dc2b943c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-483094 -n addons-483094
helpers_test.go:261: (dbg) Run:  kubectl --context addons-483094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (327.89s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.35s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-483094
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-483094: exit status 82 (2m0.462231944s)

                                                
                                                
-- stdout --
	* Stopping node "addons-483094"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-483094" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-483094
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-483094: exit status 11 (21.607175149s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.116:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-483094" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-483094
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-483094: exit status 11 (6.140448493s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.116:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-483094" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-483094
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-483094: exit status 11 (6.144236367s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.116:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-483094" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image load --daemon kicbase/echo-server:functional-159278 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 image load --daemon kicbase/echo-server:functional-159278 --alsologtostderr: (2.922964097s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 image ls: (2.36214923s)
functional_test.go:446: expected "kicbase/echo-server:functional-159278" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.29s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (142.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 node stop m02 -v=7 --alsologtostderr
E0818 19:00:48.584662   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:01:44.019093   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:02:10.506496   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.440273662s)

                                                
                                                
-- stdout --
	* Stopping node "ha-189125-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:00:41.878137   29686 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:00:41.878276   29686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:00:41.878285   29686 out.go:358] Setting ErrFile to fd 2...
	I0818 19:00:41.878290   29686 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:00:41.878475   29686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:00:41.878716   29686 mustload.go:65] Loading cluster: ha-189125
	I0818 19:00:41.879138   29686 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:00:41.879160   29686 stop.go:39] StopHost: ha-189125-m02
	I0818 19:00:41.879649   29686 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:00:41.879697   29686 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:00:41.895401   29686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0818 19:00:41.895967   29686 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:00:41.896564   29686 main.go:141] libmachine: Using API Version  1
	I0818 19:00:41.896588   29686 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:00:41.896960   29686 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:00:41.899451   29686 out.go:177] * Stopping node "ha-189125-m02"  ...
	I0818 19:00:41.900684   29686 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 19:00:41.900730   29686 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:00:41.901039   29686 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 19:00:41.901073   29686 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:00:41.904138   29686 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:00:41.904566   29686 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:00:41.904598   29686 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:00:41.904722   29686 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:00:41.904910   29686 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:00:41.905073   29686 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:00:41.905263   29686 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 19:00:41.987558   29686 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0818 19:00:42.044622   29686 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0818 19:00:42.081845   29686 main.go:141] libmachine: Stopping "ha-189125-m02"...
	I0818 19:00:42.081885   29686 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:00:42.083575   29686 main.go:141] libmachine: (ha-189125-m02) Calling .Stop
	I0818 19:00:42.087334   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 0/120
	I0818 19:00:43.088898   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 1/120
	I0818 19:00:44.090184   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 2/120
	I0818 19:00:45.091622   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 3/120
	I0818 19:00:46.093760   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 4/120
	I0818 19:00:47.095603   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 5/120
	I0818 19:00:48.097929   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 6/120
	I0818 19:00:49.099212   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 7/120
	I0818 19:00:50.100589   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 8/120
	I0818 19:00:51.101946   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 9/120
	I0818 19:00:52.104230   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 10/120
	I0818 19:00:53.106029   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 11/120
	I0818 19:00:54.107338   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 12/120
	I0818 19:00:55.109358   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 13/120
	I0818 19:00:56.110756   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 14/120
	I0818 19:00:57.112650   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 15/120
	I0818 19:00:58.114054   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 16/120
	I0818 19:00:59.115469   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 17/120
	I0818 19:01:00.116860   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 18/120
	I0818 19:01:01.118055   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 19/120
	I0818 19:01:02.120192   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 20/120
	I0818 19:01:03.121675   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 21/120
	I0818 19:01:04.123082   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 22/120
	I0818 19:01:05.124398   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 23/120
	I0818 19:01:06.125652   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 24/120
	I0818 19:01:07.127534   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 25/120
	I0818 19:01:08.129224   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 26/120
	I0818 19:01:09.130811   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 27/120
	I0818 19:01:10.132519   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 28/120
	I0818 19:01:11.134166   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 29/120
	I0818 19:01:12.136218   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 30/120
	I0818 19:01:13.138300   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 31/120
	I0818 19:01:14.140453   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 32/120
	I0818 19:01:15.141816   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 33/120
	I0818 19:01:16.143230   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 34/120
	I0818 19:01:17.145447   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 35/120
	I0818 19:01:18.146852   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 36/120
	I0818 19:01:19.148231   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 37/120
	I0818 19:01:20.149484   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 38/120
	I0818 19:01:21.150745   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 39/120
	I0818 19:01:22.152899   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 40/120
	I0818 19:01:23.154270   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 41/120
	I0818 19:01:24.155756   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 42/120
	I0818 19:01:25.157762   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 43/120
	I0818 19:01:26.159580   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 44/120
	I0818 19:01:27.161125   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 45/120
	I0818 19:01:28.162315   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 46/120
	I0818 19:01:29.163590   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 47/120
	I0818 19:01:30.165736   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 48/120
	I0818 19:01:31.167025   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 49/120
	I0818 19:01:32.169010   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 50/120
	I0818 19:01:33.170472   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 51/120
	I0818 19:01:34.171989   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 52/120
	I0818 19:01:35.173425   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 53/120
	I0818 19:01:36.174876   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 54/120
	I0818 19:01:37.176696   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 55/120
	I0818 19:01:38.178949   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 56/120
	I0818 19:01:39.180377   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 57/120
	I0818 19:01:40.181967   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 58/120
	I0818 19:01:41.183458   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 59/120
	I0818 19:01:42.184748   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 60/120
	I0818 19:01:43.186033   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 61/120
	I0818 19:01:44.187369   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 62/120
	I0818 19:01:45.188648   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 63/120
	I0818 19:01:46.190257   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 64/120
	I0818 19:01:47.192211   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 65/120
	I0818 19:01:48.193564   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 66/120
	I0818 19:01:49.195003   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 67/120
	I0818 19:01:50.196529   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 68/120
	I0818 19:01:51.197929   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 69/120
	I0818 19:01:52.199942   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 70/120
	I0818 19:01:53.201941   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 71/120
	I0818 19:01:54.203346   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 72/120
	I0818 19:01:55.204757   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 73/120
	I0818 19:01:56.207023   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 74/120
	I0818 19:01:57.209302   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 75/120
	I0818 19:01:58.210627   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 76/120
	I0818 19:01:59.211928   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 77/120
	I0818 19:02:00.213471   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 78/120
	I0818 19:02:01.214670   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 79/120
	I0818 19:02:02.216803   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 80/120
	I0818 19:02:03.218276   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 81/120
	I0818 19:02:04.220207   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 82/120
	I0818 19:02:05.222398   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 83/120
	I0818 19:02:06.223804   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 84/120
	I0818 19:02:07.225503   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 85/120
	I0818 19:02:08.226910   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 86/120
	I0818 19:02:09.228457   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 87/120
	I0818 19:02:10.229956   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 88/120
	I0818 19:02:11.231230   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 89/120
	I0818 19:02:12.233315   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 90/120
	I0818 19:02:13.234824   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 91/120
	I0818 19:02:14.236314   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 92/120
	I0818 19:02:15.237778   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 93/120
	I0818 19:02:16.239280   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 94/120
	I0818 19:02:17.240824   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 95/120
	I0818 19:02:18.242393   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 96/120
	I0818 19:02:19.244452   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 97/120
	I0818 19:02:20.245847   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 98/120
	I0818 19:02:21.247084   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 99/120
	I0818 19:02:22.248424   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 100/120
	I0818 19:02:23.249893   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 101/120
	I0818 19:02:24.251119   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 102/120
	I0818 19:02:25.252463   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 103/120
	I0818 19:02:26.253655   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 104/120
	I0818 19:02:27.255640   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 105/120
	I0818 19:02:28.257730   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 106/120
	I0818 19:02:29.258953   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 107/120
	I0818 19:02:30.260297   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 108/120
	I0818 19:02:31.261995   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 109/120
	I0818 19:02:32.264175   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 110/120
	I0818 19:02:33.265515   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 111/120
	I0818 19:02:34.266889   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 112/120
	I0818 19:02:35.268140   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 113/120
	I0818 19:02:36.269348   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 114/120
	I0818 19:02:37.271612   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 115/120
	I0818 19:02:38.273738   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 116/120
	I0818 19:02:39.275329   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 117/120
	I0818 19:02:40.276631   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 118/120
	I0818 19:02:41.277925   29686 main.go:141] libmachine: (ha-189125-m02) Waiting for machine to stop 119/120
	I0818 19:02:42.278827   29686 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0818 19:02:42.278956   29686 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
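
The stderr above shows the kvm2 driver polling the VM state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and then giving up with stop err: unable to stop vm, current state "Running". As a rough illustration of that pattern only (this is not minikube's actual driver code; the machine interface, stopWithTimeout, and stuckVM names are hypothetical), a minimal Go sketch of a bounded stop-and-poll loop looks like this:

// stop_sketch.go -- illustrative only; hypothetical names, not minikube's code.
package main

import (
	"fmt"
	"time"
)

// machine is a hypothetical stand-in for a libmachine driver handle.
type machine interface {
	Stop() error            // request a graceful shutdown
	State() (string, error) // e.g. "Running", "Stopped"
}

// stopWithTimeout requests a stop and then polls the state up to
// maxAttempts times, one poll per interval, mirroring the
// "Waiting for machine to stop 0/120 ... 119/120" lines in the log.
func stopWithTimeout(m machine, maxAttempts int, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxAttempts; i++ {
		st, err := m.State()
		if err != nil {
			return err
		}
		if st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	st, _ := m.State()
	return fmt.Errorf("unable to stop vm, current state %q", st)
}

// stuckVM simulates a guest that ignores the shutdown request,
// reproducing the failure mode seen in this test.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// A short attempt count and interval keep the demo quick; the real
	// driver in the log uses 120 attempts at roughly one per second.
	if err := stopWithTimeout(stuckVM{}, 5, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}
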
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-189125 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 3 (19.249659379s)

                                                
                                                
-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:02:42.322040   30116 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:02:42.322293   30116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:02:42.322302   30116 out.go:358] Setting ErrFile to fd 2...
	I0818 19:02:42.322306   30116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:02:42.322479   30116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:02:42.322647   30116 out.go:352] Setting JSON to false
	I0818 19:02:42.322672   30116 mustload.go:65] Loading cluster: ha-189125
	I0818 19:02:42.322800   30116 notify.go:220] Checking for updates...
	I0818 19:02:42.323012   30116 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:02:42.323029   30116 status.go:255] checking status of ha-189125 ...
	I0818 19:02:42.323443   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:02:42.323495   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:42.342212   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0818 19:02:42.342649   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:42.343167   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:02:42.343222   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:42.343743   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:42.343949   30116 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:02:42.345592   30116 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:02:42.345616   30116 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:02:42.345910   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:02:42.345959   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:42.362126   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I0818 19:02:42.362526   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:42.362962   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:02:42.362986   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:42.363305   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:42.363519   30116 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:02:42.366329   30116 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:02:42.366747   30116 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:02:42.366781   30116 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:02:42.366911   30116 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:02:42.367216   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:02:42.367257   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:42.381457   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36447
	I0818 19:02:42.381876   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:42.382350   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:02:42.382377   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:42.382634   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:42.382790   30116 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:02:42.382967   30116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:02:42.382986   30116 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:02:42.385386   30116 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:02:42.385786   30116 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:02:42.385819   30116 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:02:42.385917   30116 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:02:42.386087   30116 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:02:42.386229   30116 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:02:42.386371   30116 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:02:42.468823   30116 ssh_runner.go:195] Run: systemctl --version
	I0818 19:02:42.476497   30116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:02:42.495050   30116 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:02:42.495085   30116 api_server.go:166] Checking apiserver status ...
	I0818 19:02:42.495128   30116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:02:42.515219   30116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:02:42.527762   30116 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:02:42.527832   30116 ssh_runner.go:195] Run: ls
	I0818 19:02:42.532357   30116 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:02:42.539816   30116 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:02:42.539837   30116 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:02:42.539846   30116 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:02:42.539877   30116 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:02:42.540243   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:02:42.540284   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:42.555127   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40609
	I0818 19:02:42.555605   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:42.556107   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:02:42.556129   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:42.556448   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:42.556635   30116 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:02:42.558199   30116 status.go:330] ha-189125-m02 host status = "Running" (err=<nil>)
	I0818 19:02:42.558212   30116 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:02:42.558529   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:02:42.558561   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:42.573282   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0818 19:02:42.573727   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:42.574201   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:02:42.574227   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:42.574524   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:42.574692   30116 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 19:02:42.577128   30116 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:02:42.577574   30116 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:02:42.577599   30116 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:02:42.577733   30116 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:02:42.578164   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:02:42.578203   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:42.592672   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37255
	I0818 19:02:42.593116   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:42.593545   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:02:42.593569   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:42.593877   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:42.594053   30116 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:02:42.594224   30116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:02:42.594248   30116 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:02:42.597160   30116 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:02:42.597583   30116 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:02:42.597604   30116 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:02:42.597754   30116 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:02:42.597919   30116 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:02:42.598074   30116 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:02:42.598201   30116 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	W0818 19:03:01.151562   30116 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:01.151685   30116 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0818 19:03:01.151707   30116 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:01.151721   30116 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 19:03:01.151743   30116 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:01.151750   30116 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:03:01.152188   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:01.152240   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:01.168463   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0818 19:03:01.168891   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:01.169403   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:03:01.169434   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:01.169797   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:01.169987   30116 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:03:01.171695   30116 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:03:01.171710   30116 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:01.171988   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:01.172018   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:01.186305   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0818 19:03:01.186653   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:01.187065   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:03:01.187093   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:01.187365   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:01.187568   30116 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:03:01.190191   30116 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:01.190564   30116 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:01.190592   30116 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:01.190702   30116 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:01.191000   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:01.191036   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:01.206031   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39991
	I0818 19:03:01.206407   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:01.206878   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:03:01.206897   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:01.207196   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:01.207423   30116 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:03:01.207590   30116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:01.207608   30116 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:03:01.210260   30116 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:01.210663   30116 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:01.210699   30116 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:01.210806   30116 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:03:01.210963   30116 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:03:01.211100   30116 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:03:01.211206   30116 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:03:01.301283   30116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:01.323089   30116 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:01.323115   30116 api_server.go:166] Checking apiserver status ...
	I0818 19:03:01.323150   30116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:01.340379   30116 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:03:01.350466   30116 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:01.350517   30116 ssh_runner.go:195] Run: ls
	I0818 19:03:01.355292   30116 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:01.360926   30116 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:01.360958   30116 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:03:01.360967   30116 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:01.360982   30116 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:03:01.361366   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:01.361402   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:01.376519   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0818 19:03:01.377159   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:01.377657   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:03:01.377679   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:01.378007   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:01.378189   30116 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:03:01.379909   30116 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:03:01.379926   30116 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:01.380210   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:01.380264   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:01.396319   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
	I0818 19:03:01.396756   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:01.397236   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:03:01.397259   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:01.397586   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:01.397767   30116 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:03:01.400981   30116 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:01.401452   30116 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:01.401478   30116 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:01.401689   30116 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:01.402055   30116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:01.402114   30116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:01.417263   30116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0818 19:03:01.417758   30116 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:01.418292   30116 main.go:141] libmachine: Using API Version  1
	I0818 19:03:01.418315   30116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:01.418617   30116 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:01.418788   30116 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:03:01.418977   30116 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:01.419006   30116 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:03:01.421907   30116 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:01.422350   30116 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:01.422389   30116 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:01.422578   30116 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:03:01.422739   30116 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:03:01.422869   30116 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:03:01.422979   30116 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:03:01.509224   30116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:01.528345   30116 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
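
The status stderr above also records an apiserver health probe ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok") alongside the failed SSH dial to m02. A minimal sketch of that kind of probe, assuming the endpoint from the log and skipping TLS verification purely to stay self-contained (checkHealthz and the client settings are illustrative, not minikube's implementation):

// healthz_sketch.go -- illustrative only; a real check would trust the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues a GET against the apiserver /healthz endpoint and
// treats any non-200 response (or a transport error) as a failure.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification keeps the sketch runnable anywhere;
			// a production probe would verify the apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	// Endpoint taken from the log above; adjust for your own cluster.
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver check failed:", err)
	}
}
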
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-189125 -n ha-189125
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-189125 logs -n 25: (1.466943877s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125:/home/docker/cp-test_ha-189125-m03_ha-189125.txt                       |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125 sudo cat                                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125.txt                                 |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m02:/home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m02 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04:/home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m04 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp testdata/cp-test.txt                                                | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125:/home/docker/cp-test_ha-189125-m04_ha-189125.txt                       |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125 sudo cat                                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125.txt                                 |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m02:/home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m02 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03:/home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m03 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-189125 node stop m02 -v=7                                                     | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:55:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:55:16.832717   25471 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:55:16.832945   25471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:55:16.832952   25471 out.go:358] Setting ErrFile to fd 2...
	I0818 18:55:16.832957   25471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:55:16.833133   25471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 18:55:16.833656   25471 out.go:352] Setting JSON to false
	I0818 18:55:16.834453   25471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2261,"bootTime":1724005056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:55:16.834502   25471 start.go:139] virtualization: kvm guest
	I0818 18:55:16.836466   25471 out.go:177] * [ha-189125] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:55:16.837827   25471 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:55:16.837833   25471 notify.go:220] Checking for updates...
	I0818 18:55:16.840203   25471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:55:16.841388   25471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:55:16.842493   25471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:55:16.843652   25471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:55:16.844817   25471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:55:16.846129   25471 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:55:16.880645   25471 out.go:177] * Using the kvm2 driver based on user configuration
	I0818 18:55:16.881721   25471 start.go:297] selected driver: kvm2
	I0818 18:55:16.881739   25471 start.go:901] validating driver "kvm2" against <nil>
	I0818 18:55:16.881750   25471 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:55:16.882417   25471 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:55:16.882488   25471 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 18:55:16.897244   25471 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 18:55:16.897295   25471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:55:16.897485   25471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:55:16.897557   25471 cni.go:84] Creating CNI manager for ""
	I0818 18:55:16.897568   25471 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0818 18:55:16.897573   25471 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0818 18:55:16.897619   25471 start.go:340] cluster config:
	{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0818 18:55:16.897705   25471 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:55:16.899614   25471 out.go:177] * Starting "ha-189125" primary control-plane node in "ha-189125" cluster
	I0818 18:55:16.900764   25471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:55:16.900805   25471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 18:55:16.900830   25471 cache.go:56] Caching tarball of preloaded images
	I0818 18:55:16.900937   25471 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 18:55:16.900948   25471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 18:55:16.901329   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:55:16.901358   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json: {Name:mk37ad2e33452381b7bc2ec4f6729509252ed83d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:16.901517   25471 start.go:360] acquireMachinesLock for ha-189125: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:55:16.901556   25471 start.go:364] duration metric: took 20.868µs to acquireMachinesLock for "ha-189125"
	I0818 18:55:16.901574   25471 start.go:93] Provisioning new machine with config: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:55:16.901634   25471 start.go:125] createHost starting for "" (driver="kvm2")
	I0818 18:55:16.903091   25471 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 18:55:16.903200   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:55:16.903232   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:55:16.917286   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0818 18:55:16.917669   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:55:16.918149   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:55:16.918169   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:55:16.918479   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:55:16.918662   25471 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 18:55:16.918795   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:16.918981   25471 start.go:159] libmachine.API.Create for "ha-189125" (driver="kvm2")
	I0818 18:55:16.919010   25471 client.go:168] LocalClient.Create starting
	I0818 18:55:16.919035   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 18:55:16.919068   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:55:16.919086   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:55:16.919145   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 18:55:16.919164   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:55:16.919178   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:55:16.919193   25471 main.go:141] libmachine: Running pre-create checks...
	I0818 18:55:16.919200   25471 main.go:141] libmachine: (ha-189125) Calling .PreCreateCheck
	I0818 18:55:16.919587   25471 main.go:141] libmachine: (ha-189125) Calling .GetConfigRaw
	I0818 18:55:16.919935   25471 main.go:141] libmachine: Creating machine...
	I0818 18:55:16.919947   25471 main.go:141] libmachine: (ha-189125) Calling .Create
	I0818 18:55:16.920053   25471 main.go:141] libmachine: (ha-189125) Creating KVM machine...
	I0818 18:55:16.921268   25471 main.go:141] libmachine: (ha-189125) DBG | found existing default KVM network
	I0818 18:55:16.921919   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:16.921778   25494 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0818 18:55:16.921937   25471 main.go:141] libmachine: (ha-189125) DBG | created network xml: 
	I0818 18:55:16.921950   25471 main.go:141] libmachine: (ha-189125) DBG | <network>
	I0818 18:55:16.921962   25471 main.go:141] libmachine: (ha-189125) DBG |   <name>mk-ha-189125</name>
	I0818 18:55:16.921976   25471 main.go:141] libmachine: (ha-189125) DBG |   <dns enable='no'/>
	I0818 18:55:16.921982   25471 main.go:141] libmachine: (ha-189125) DBG |   
	I0818 18:55:16.922010   25471 main.go:141] libmachine: (ha-189125) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0818 18:55:16.922031   25471 main.go:141] libmachine: (ha-189125) DBG |     <dhcp>
	I0818 18:55:16.922057   25471 main.go:141] libmachine: (ha-189125) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0818 18:55:16.922068   25471 main.go:141] libmachine: (ha-189125) DBG |     </dhcp>
	I0818 18:55:16.922078   25471 main.go:141] libmachine: (ha-189125) DBG |   </ip>
	I0818 18:55:16.922085   25471 main.go:141] libmachine: (ha-189125) DBG |   
	I0818 18:55:16.922097   25471 main.go:141] libmachine: (ha-189125) DBG | </network>
	I0818 18:55:16.922110   25471 main.go:141] libmachine: (ha-189125) DBG | 
	I0818 18:55:16.927287   25471 main.go:141] libmachine: (ha-189125) DBG | trying to create private KVM network mk-ha-189125 192.168.39.0/24...
	I0818 18:55:16.988469   25471 main.go:141] libmachine: (ha-189125) DBG | private KVM network mk-ha-189125 192.168.39.0/24 created
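
The driver builds the network XML shown above and then creates the private libvirt network mk-ha-189125. As an illustrative sketch only (this is not the kvm2 driver's code; the import path and the exact API usage here are assumptions based on the libvirt Go bindings), the same define-and-start sequence looks roughly like:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    const networkXML = `<network>
      <name>mk-ha-189125</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp><range start='192.168.39.2' end='192.168.39.253'/></dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        net, err := conn.NetworkDefineXML(networkXML) // persistently define the network
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()

        if err := net.Create(); err != nil { // start it ("private KVM network ... created")
            log.Fatal(err)
        }
        if err := net.SetAutostart(true); err != nil {
            log.Fatal(err)
        }
    }
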
	I0818 18:55:16.988518   25471 main.go:141] libmachine: (ha-189125) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125 ...
	I0818 18:55:16.988537   25471 main.go:141] libmachine: (ha-189125) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:55:16.988550   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:16.988436   25494 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:55:16.988631   25471 main.go:141] libmachine: (ha-189125) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 18:55:17.226147   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:17.226036   25494 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa...
	I0818 18:55:17.511195   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:17.511048   25494 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/ha-189125.rawdisk...
	I0818 18:55:17.511222   25471 main.go:141] libmachine: (ha-189125) DBG | Writing magic tar header
	I0818 18:55:17.511232   25471 main.go:141] libmachine: (ha-189125) DBG | Writing SSH key tar header
	I0818 18:55:17.511240   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:17.511170   25494 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125 ...
	I0818 18:55:17.511305   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125
	I0818 18:55:17.511333   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125 (perms=drwx------)
	I0818 18:55:17.511358   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 18:55:17.511372   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 18:55:17.511412   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 18:55:17.511432   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:55:17.511464   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 18:55:17.511480   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 18:55:17.511489   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 18:55:17.511502   25471 main.go:141] libmachine: (ha-189125) Creating domain...
	I0818 18:55:17.511522   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 18:55:17.511535   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 18:55:17.511541   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins
	I0818 18:55:17.511549   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home
	I0818 18:55:17.511556   25471 main.go:141] libmachine: (ha-189125) DBG | Skipping /home - not owner
	I0818 18:55:17.512592   25471 main.go:141] libmachine: (ha-189125) define libvirt domain using xml: 
	I0818 18:55:17.512616   25471 main.go:141] libmachine: (ha-189125) <domain type='kvm'>
	I0818 18:55:17.512626   25471 main.go:141] libmachine: (ha-189125)   <name>ha-189125</name>
	I0818 18:55:17.512638   25471 main.go:141] libmachine: (ha-189125)   <memory unit='MiB'>2200</memory>
	I0818 18:55:17.512650   25471 main.go:141] libmachine: (ha-189125)   <vcpu>2</vcpu>
	I0818 18:55:17.512660   25471 main.go:141] libmachine: (ha-189125)   <features>
	I0818 18:55:17.512668   25471 main.go:141] libmachine: (ha-189125)     <acpi/>
	I0818 18:55:17.512678   25471 main.go:141] libmachine: (ha-189125)     <apic/>
	I0818 18:55:17.512686   25471 main.go:141] libmachine: (ha-189125)     <pae/>
	I0818 18:55:17.512705   25471 main.go:141] libmachine: (ha-189125)     
	I0818 18:55:17.512726   25471 main.go:141] libmachine: (ha-189125)   </features>
	I0818 18:55:17.512740   25471 main.go:141] libmachine: (ha-189125)   <cpu mode='host-passthrough'>
	I0818 18:55:17.512746   25471 main.go:141] libmachine: (ha-189125)   
	I0818 18:55:17.512755   25471 main.go:141] libmachine: (ha-189125)   </cpu>
	I0818 18:55:17.512763   25471 main.go:141] libmachine: (ha-189125)   <os>
	I0818 18:55:17.512774   25471 main.go:141] libmachine: (ha-189125)     <type>hvm</type>
	I0818 18:55:17.512785   25471 main.go:141] libmachine: (ha-189125)     <boot dev='cdrom'/>
	I0818 18:55:17.512792   25471 main.go:141] libmachine: (ha-189125)     <boot dev='hd'/>
	I0818 18:55:17.512798   25471 main.go:141] libmachine: (ha-189125)     <bootmenu enable='no'/>
	I0818 18:55:17.512804   25471 main.go:141] libmachine: (ha-189125)   </os>
	I0818 18:55:17.512809   25471 main.go:141] libmachine: (ha-189125)   <devices>
	I0818 18:55:17.512819   25471 main.go:141] libmachine: (ha-189125)     <disk type='file' device='cdrom'>
	I0818 18:55:17.512846   25471 main.go:141] libmachine: (ha-189125)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/boot2docker.iso'/>
	I0818 18:55:17.512868   25471 main.go:141] libmachine: (ha-189125)       <target dev='hdc' bus='scsi'/>
	I0818 18:55:17.512879   25471 main.go:141] libmachine: (ha-189125)       <readonly/>
	I0818 18:55:17.512883   25471 main.go:141] libmachine: (ha-189125)     </disk>
	I0818 18:55:17.512892   25471 main.go:141] libmachine: (ha-189125)     <disk type='file' device='disk'>
	I0818 18:55:17.512900   25471 main.go:141] libmachine: (ha-189125)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 18:55:17.512917   25471 main.go:141] libmachine: (ha-189125)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/ha-189125.rawdisk'/>
	I0818 18:55:17.512925   25471 main.go:141] libmachine: (ha-189125)       <target dev='hda' bus='virtio'/>
	I0818 18:55:17.512931   25471 main.go:141] libmachine: (ha-189125)     </disk>
	I0818 18:55:17.512945   25471 main.go:141] libmachine: (ha-189125)     <interface type='network'>
	I0818 18:55:17.512958   25471 main.go:141] libmachine: (ha-189125)       <source network='mk-ha-189125'/>
	I0818 18:55:17.512972   25471 main.go:141] libmachine: (ha-189125)       <model type='virtio'/>
	I0818 18:55:17.512980   25471 main.go:141] libmachine: (ha-189125)     </interface>
	I0818 18:55:17.512985   25471 main.go:141] libmachine: (ha-189125)     <interface type='network'>
	I0818 18:55:17.512990   25471 main.go:141] libmachine: (ha-189125)       <source network='default'/>
	I0818 18:55:17.512994   25471 main.go:141] libmachine: (ha-189125)       <model type='virtio'/>
	I0818 18:55:17.512999   25471 main.go:141] libmachine: (ha-189125)     </interface>
	I0818 18:55:17.513003   25471 main.go:141] libmachine: (ha-189125)     <serial type='pty'>
	I0818 18:55:17.513008   25471 main.go:141] libmachine: (ha-189125)       <target port='0'/>
	I0818 18:55:17.513012   25471 main.go:141] libmachine: (ha-189125)     </serial>
	I0818 18:55:17.513017   25471 main.go:141] libmachine: (ha-189125)     <console type='pty'>
	I0818 18:55:17.513023   25471 main.go:141] libmachine: (ha-189125)       <target type='serial' port='0'/>
	I0818 18:55:17.513031   25471 main.go:141] libmachine: (ha-189125)     </console>
	I0818 18:55:17.513042   25471 main.go:141] libmachine: (ha-189125)     <rng model='virtio'>
	I0818 18:55:17.513052   25471 main.go:141] libmachine: (ha-189125)       <backend model='random'>/dev/random</backend>
	I0818 18:55:17.513059   25471 main.go:141] libmachine: (ha-189125)     </rng>
	I0818 18:55:17.513066   25471 main.go:141] libmachine: (ha-189125)     
	I0818 18:55:17.513071   25471 main.go:141] libmachine: (ha-189125)     
	I0818 18:55:17.513075   25471 main.go:141] libmachine: (ha-189125)   </devices>
	I0818 18:55:17.513079   25471 main.go:141] libmachine: (ha-189125) </domain>
	I0818 18:55:17.513086   25471 main.go:141] libmachine: (ha-189125) 
	I0818 18:55:17.516836   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:be:c8:bc in network default
	I0818 18:55:17.517392   25471 main.go:141] libmachine: (ha-189125) Ensuring networks are active...
	I0818 18:55:17.517417   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:17.517999   25471 main.go:141] libmachine: (ha-189125) Ensuring network default is active
	I0818 18:55:17.518309   25471 main.go:141] libmachine: (ha-189125) Ensuring network mk-ha-189125 is active
	I0818 18:55:17.518725   25471 main.go:141] libmachine: (ha-189125) Getting domain xml...
	I0818 18:55:17.519345   25471 main.go:141] libmachine: (ha-189125) Creating domain...
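
Defining and booting the domain follows the same pattern as the network: the domain XML logged above is handed to libvirt, then the domain is started. Continuing the libvirt sketch from the network step (illustrative only, not the driver's actual code):

    // defineAndStart persists a domain from XML like the block just logged,
    // then boots it ("Creating domain...").
    func defineAndStart(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML) // persistent definition, like `virsh define`
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create() // start the VM; DHCP on mk-ha-189125 will hand it an address next
    }
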
	I0818 18:55:18.708441   25471 main.go:141] libmachine: (ha-189125) Waiting to get IP...
	I0818 18:55:18.709297   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:18.709695   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:18.709727   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:18.709674   25494 retry.go:31] will retry after 206.092137ms: waiting for machine to come up
	I0818 18:55:18.916995   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:18.917414   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:18.917448   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:18.917370   25494 retry.go:31] will retry after 385.757474ms: waiting for machine to come up
	I0818 18:55:19.304852   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:19.305282   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:19.305310   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:19.305235   25494 retry.go:31] will retry after 462.930892ms: waiting for machine to come up
	I0818 18:55:19.769936   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:19.770312   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:19.770334   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:19.770283   25494 retry.go:31] will retry after 474.206876ms: waiting for machine to come up
	I0818 18:55:20.246010   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:20.246434   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:20.246462   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:20.246383   25494 retry.go:31] will retry after 554.966147ms: waiting for machine to come up
	I0818 18:55:20.803186   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:20.803667   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:20.803702   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:20.803601   25494 retry.go:31] will retry after 691.96919ms: waiting for machine to come up
	I0818 18:55:21.497609   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:21.498099   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:21.498130   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:21.498068   25494 retry.go:31] will retry after 1.121268882s: waiting for machine to come up
	I0818 18:55:22.620829   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:22.621298   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:22.621324   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:22.621247   25494 retry.go:31] will retry after 1.211418408s: waiting for machine to come up
	I0818 18:55:23.834734   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:23.835096   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:23.835133   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:23.835054   25494 retry.go:31] will retry after 1.210290747s: waiting for machine to come up
	I0818 18:55:25.047326   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:25.047678   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:25.047707   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:25.047626   25494 retry.go:31] will retry after 2.136992489s: waiting for machine to come up
	I0818 18:55:27.185755   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:27.186178   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:27.186204   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:27.186110   25494 retry.go:31] will retry after 2.212172863s: waiting for machine to come up
	I0818 18:55:29.399454   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:29.399875   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:29.399912   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:29.399826   25494 retry.go:31] will retry after 2.265404223s: waiting for machine to come up
	I0818 18:55:31.666568   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:31.666935   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:31.666964   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:31.666892   25494 retry.go:31] will retry after 4.302632484s: waiting for machine to come up
	I0818 18:55:35.973932   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:35.974308   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:35.974333   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:35.974266   25494 retry.go:31] will retry after 3.43667283s: waiting for machine to come up
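
The "Waiting to get IP..." loop above polls for the domain's DHCP lease with steadily growing delays (206ms, 385ms, ... up to several seconds). A minimal sketch of the same pattern, continuing the libvirt sketch (illustrative; the lease field names match the struct printed in the log, but the helper and backoff details are assumptions):

    // waitForIP polls the network's DHCP leases for the domain's MAC address,
    // backing off between attempts until the timeout expires.
    // (imports: fmt, strings, time, plus the libvirt bindings used above)
    func waitForIP(network *libvirt.Network, mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            leases, err := network.GetDHCPLeases()
            if err != nil {
                return "", err
            }
            for _, l := range leases {
                if strings.EqualFold(l.Mac, mac) {
                    return l.IPaddr, nil // e.g. 192.168.39.49 for 52:54:00:e9:51:81 below
                }
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // grow the wait, roughly like the retry.go delays above
            }
        }
        return "", fmt.Errorf("no DHCP lease for %s within %s", mac, timeout)
    }
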
	I0818 18:55:39.412726   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.413154   25471 main.go:141] libmachine: (ha-189125) Found IP for machine: 192.168.39.49
	I0818 18:55:39.413170   25471 main.go:141] libmachine: (ha-189125) Reserving static IP address...
	I0818 18:55:39.413182   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has current primary IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.413644   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find host DHCP lease matching {name: "ha-189125", mac: "52:54:00:e9:51:81", ip: "192.168.39.49"} in network mk-ha-189125
	I0818 18:55:39.481998   25471 main.go:141] libmachine: (ha-189125) DBG | Getting to WaitForSSH function...
	I0818 18:55:39.482030   25471 main.go:141] libmachine: (ha-189125) Reserved static IP address: 192.168.39.49
	I0818 18:55:39.482048   25471 main.go:141] libmachine: (ha-189125) Waiting for SSH to be available...
	I0818 18:55:39.484453   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.484849   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.484872   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.485012   25471 main.go:141] libmachine: (ha-189125) DBG | Using SSH client type: external
	I0818 18:55:39.485033   25471 main.go:141] libmachine: (ha-189125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa (-rw-------)
	I0818 18:55:39.485151   25471 main.go:141] libmachine: (ha-189125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 18:55:39.485169   25471 main.go:141] libmachine: (ha-189125) DBG | About to run SSH command:
	I0818 18:55:39.485188   25471 main.go:141] libmachine: (ha-189125) DBG | exit 0
	I0818 18:55:39.607190   25471 main.go:141] libmachine: (ha-189125) DBG | SSH cmd err, output: <nil>: 
	I0818 18:55:39.607480   25471 main.go:141] libmachine: (ha-189125) KVM machine creation complete!
	I0818 18:55:39.607826   25471 main.go:141] libmachine: (ha-189125) Calling .GetConfigRaw
	I0818 18:55:39.608369   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:39.608527   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:39.608663   25471 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 18:55:39.608680   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:55:39.609760   25471 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 18:55:39.609773   25471 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 18:55:39.609778   25471 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 18:55:39.609783   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:39.612219   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.612570   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.612596   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.612715   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:39.612889   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.613042   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.613175   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:39.613338   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:39.613570   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:39.613586   25471 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 18:55:39.710361   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
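
The machine is treated as reachable once `exit 0` succeeds over SSH with the generated id_rsa key (the external ssh invocation is shown above; the later probe uses a native Go client). A standalone sketch of that probe with golang.org/x/crypto/ssh (illustrative only, not libmachine's runner):

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // probeSSH dials the VM and runs the same no-op command the log shows.
    func probeSSH(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa"
        for probeSSH("192.168.39.49:22", "docker", key) != nil {
            time.Sleep(2 * time.Second) // keep retrying until sshd in the guest is up
        }
        log.Println("SSH is available")
    }
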
	I0818 18:55:39.710385   25471 main.go:141] libmachine: Detecting the provisioner...
	I0818 18:55:39.710396   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:39.713049   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.713345   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.713368   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.713532   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:39.713705   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.713861   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.713980   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:39.714219   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:39.714463   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:39.714478   25471 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 18:55:39.811866   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 18:55:39.811938   25471 main.go:141] libmachine: found compatible host: buildroot
	I0818 18:55:39.811948   25471 main.go:141] libmachine: Provisioning with buildroot...
	I0818 18:55:39.811955   25471 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 18:55:39.812198   25471 buildroot.go:166] provisioning hostname "ha-189125"
	I0818 18:55:39.812220   25471 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 18:55:39.812401   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:39.814672   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.814994   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.815021   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.815148   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:39.815329   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.815496   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.815623   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:39.815770   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:39.815955   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:39.815973   25471 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-189125 && echo "ha-189125" | sudo tee /etc/hostname
	I0818 18:55:39.929682   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125
	
	I0818 18:55:39.929712   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:39.932326   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.932689   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.932711   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.932837   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:39.933010   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.933143   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.933248   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:39.933393   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:39.933569   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:39.933590   25471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-189125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-189125/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-189125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:55:40.040891   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:55:40.040919   25471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 18:55:40.040975   25471 buildroot.go:174] setting up certificates
	I0818 18:55:40.040991   25471 provision.go:84] configureAuth start
	I0818 18:55:40.041007   25471 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 18:55:40.041264   25471 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 18:55:40.044223   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.044514   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.044537   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.044671   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.046879   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.047190   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.047224   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.047362   25471 provision.go:143] copyHostCerts
	I0818 18:55:40.047405   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:55:40.047449   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 18:55:40.047466   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:55:40.047547   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 18:55:40.047671   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:55:40.047700   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 18:55:40.047714   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:55:40.047755   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 18:55:40.047834   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:55:40.047857   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 18:55:40.047867   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:55:40.047905   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 18:55:40.047985   25471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.ha-189125 san=[127.0.0.1 192.168.39.49 ha-189125 localhost minikube]
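
The server certificate is generated locally and signed by the minikube CA, with the SAN list printed above (127.0.0.1, 192.168.39.49, ha-189125, localhost, minikube) and org jenkins.ha-189125. A compact standard-library sketch of that step (illustrative; minikube's own helper differs in details such as key size, serial handling, and expiry, which are assumptions here):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/tls"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load the CA pair named in the log (ca.pem / ca-key.pem).
        caPair, err := tls.LoadX509KeyPair("ca.pem", "ca-key.pem")
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caPair.Certificate[0])
        if err != nil {
            log.Fatal(err)
        }

        // Fresh key for the server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-189125"}},
            DNSNames:     []string{"ha-189125", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.49")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // expiry is an assumption
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caPair.PrivateKey)
        if err != nil {
            log.Fatal(err)
        }

        // Emit the PEM blocks to stdout; in the log these land in server.pem / server-key.pem.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }
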
	I0818 18:55:40.137859   25471 provision.go:177] copyRemoteCerts
	I0818 18:55:40.137907   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:55:40.137937   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.140484   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.140822   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.140846   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.141020   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.141217   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.141356   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.141490   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:55:40.221683   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 18:55:40.221748   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:55:40.246144   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 18:55:40.246221   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 18:55:40.270891   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 18:55:40.270950   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 18:55:40.294407   25471 provision.go:87] duration metric: took 253.403083ms to configureAuth
	I0818 18:55:40.294429   25471 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:55:40.294570   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:55:40.294631   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.297201   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.297647   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.297683   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.297866   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.298046   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.298204   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.298385   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.298535   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:40.298693   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:40.298714   25471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 18:55:40.552081   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 18:55:40.552127   25471 main.go:141] libmachine: Checking connection to Docker...
	I0818 18:55:40.552134   25471 main.go:141] libmachine: (ha-189125) Calling .GetURL
	I0818 18:55:40.553429   25471 main.go:141] libmachine: (ha-189125) DBG | Using libvirt version 6000000
	I0818 18:55:40.555606   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.555907   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.555930   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.556075   25471 main.go:141] libmachine: Docker is up and running!
	I0818 18:55:40.556091   25471 main.go:141] libmachine: Reticulating splines...
	I0818 18:55:40.556099   25471 client.go:171] duration metric: took 23.637082284s to LocalClient.Create
	I0818 18:55:40.556123   25471 start.go:167] duration metric: took 23.637142268s to libmachine.API.Create "ha-189125"
	I0818 18:55:40.556130   25471 start.go:293] postStartSetup for "ha-189125" (driver="kvm2")
	I0818 18:55:40.556140   25471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:55:40.556164   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.556362   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:55:40.556384   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.558396   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.558652   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.558676   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.558751   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.558911   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.559052   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.559167   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:55:40.637386   25471 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:55:40.642028   25471 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:55:40.642047   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 18:55:40.642111   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 18:55:40.642192   25471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 18:55:40.642205   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 18:55:40.642323   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 18:55:40.651801   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:55:40.678851   25471 start.go:296] duration metric: took 122.709599ms for postStartSetup
	I0818 18:55:40.678900   25471 main.go:141] libmachine: (ha-189125) Calling .GetConfigRaw
	I0818 18:55:40.679466   25471 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 18:55:40.681984   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.682315   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.682362   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.682583   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:55:40.682768   25471 start.go:128] duration metric: took 23.781124031s to createHost
	I0818 18:55:40.682793   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.684715   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.684964   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.684991   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.685094   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.685280   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.685436   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.685582   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.685742   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:40.685898   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:40.685910   25471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:55:40.784180   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007340.760671490
	
	I0818 18:55:40.784203   25471 fix.go:216] guest clock: 1724007340.760671490
	I0818 18:55:40.784213   25471 fix.go:229] Guest: 2024-08-18 18:55:40.76067149 +0000 UTC Remote: 2024-08-18 18:55:40.682779935 +0000 UTC m=+23.887777007 (delta=77.891555ms)
	I0818 18:55:40.784237   25471 fix.go:200] guest clock delta is within tolerance: 77.891555ms
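
The fix step above reads the guest clock with `date +%s.%N`, compares it to the host clock captured around the same moment, and accepts the machine when the absolute delta (77.9ms here) is within tolerance. Schematically (the tolerance value itself is not shown in the log):

    // clockWithinTolerance reports whether guest and host clocks agree closely enough.
    func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }
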
	I0818 18:55:40.784243   25471 start.go:83] releasing machines lock for "ha-189125", held for 23.882677576s
	I0818 18:55:40.784261   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.784488   25471 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 18:55:40.786870   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.787148   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.787181   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.787307   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.787790   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.787958   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.788045   25471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:55:40.788083   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.788208   25471 ssh_runner.go:195] Run: cat /version.json
	I0818 18:55:40.788233   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.790599   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.790807   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.790879   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.790909   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.791036   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.791181   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.791195   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.791197   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.791334   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.791407   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.791548   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.791544   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:55:40.791656   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.791776   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:55:40.864592   25471 ssh_runner.go:195] Run: systemctl --version
	I0818 18:55:40.890693   25471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 18:55:41.052400   25471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:55:41.058445   25471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:55:41.058527   25471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:55:41.074831   25471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 18:55:41.074857   25471 start.go:495] detecting cgroup driver to use...
	I0818 18:55:41.074927   25471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:55:41.091671   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:55:41.108653   25471 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:55:41.108714   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:55:41.122060   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:55:41.135284   25471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:55:41.251804   25471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:55:41.416163   25471 docker.go:233] disabling docker service ...
	I0818 18:55:41.416252   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:55:41.430940   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:55:41.443776   25471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:55:41.565375   25471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:55:41.695008   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:55:41.708805   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:55:41.726948   25471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 18:55:41.727005   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.736547   25471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 18:55:41.736622   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.746391   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.755878   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.765834   25471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:55:41.775713   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.785050   25471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.801478   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
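Taken together, the sed edits above configure the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup driver is set to cgroupfs with conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A quick way to confirm the result on the node (a sketch reconstructed from the commands above, not captured output):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed commands: pause_image = "registry.k8s.io/pause:3.10",
    # cgroup_manager = "cgroupfs", conmon_cgroup = "pod",
    # and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls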
	I0818 18:55:41.810894   25471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:55:41.819551   25471 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 18:55:41.819604   25471 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 18:55:41.831737   25471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
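The failed sysctl a few lines up is expected on a fresh VM: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, which is why the modprobe follows, and IP forwarding is then enabled by hand. A minimal post-check over SSH (assumed commands, not part of the log):

    lsmod | grep br_netfilter                   # module should now be loaded
    sysctl net.bridge.bridge-nf-call-iptables   # key should now resolve (kubeadm preflight expects 1)
    sysctl net.ipv4.ip_forward                  # 1 after the echo above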
	I0818 18:55:41.842090   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:55:41.966114   25471 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 18:55:42.104549   25471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 18:55:42.104617   25471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 18:55:42.109616   25471 start.go:563] Will wait 60s for crictl version
	I0818 18:55:42.109673   25471 ssh_runner.go:195] Run: which crictl
	I0818 18:55:42.113520   25471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:55:42.153776   25471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 18:55:42.153850   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:55:42.181340   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:55:42.211132   25471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 18:55:42.212527   25471 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 18:55:42.215214   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:42.215615   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:42.215644   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:42.215829   25471 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:55:42.220002   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
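The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends a fresh mapping to the gateway address, giving the guest a stable name for the host. Checking it afterwards is straightforward:

    grep 'host.minikube.internal' /etc/hosts
    # expected, per the command above: 192.168.39.1	host.minikube.internal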
	I0818 18:55:42.232820   25471 kubeadm.go:883] updating cluster {Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 18:55:42.232909   25471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:55:42.232951   25471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:55:42.265128   25471 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 18:55:42.265194   25471 ssh_runner.go:195] Run: which lz4
	I0818 18:55:42.269025   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0818 18:55:42.269130   25471 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 18:55:42.273218   25471 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 18:55:42.273249   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 18:55:43.595544   25471 crio.go:462] duration metric: took 1.326438024s to copy over tarball
	I0818 18:55:43.595612   25471 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 18:55:45.624453   25471 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.028819366s)
	I0818 18:55:45.624479   25471 crio.go:469] duration metric: took 2.028909373s to extract the tarball
	I0818 18:55:45.624486   25471 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 18:55:45.661892   25471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:55:45.704692   25471 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 18:55:45.704716   25471 cache_images.go:84] Images are preloaded, skipping loading
	I0818 18:55:45.704725   25471 kubeadm.go:934] updating node { 192.168.39.49 8443 v1.31.0 crio true true} ...
	I0818 18:55:45.704841   25471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-189125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 18:55:45.704904   25471 ssh_runner.go:195] Run: crio config
	I0818 18:55:45.753433   25471 cni.go:84] Creating CNI manager for ""
	I0818 18:55:45.753451   25471 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0818 18:55:45.753460   25471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 18:55:45.753482   25471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.49 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-189125 NodeName:ha-189125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 18:55:45.753619   25471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-189125"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 18:55:45.753640   25471 kube-vip.go:115] generating kube-vip config ...
	I0818 18:55:45.753680   25471 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 18:55:45.769318   25471 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 18:55:45.769457   25471 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
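The generated manifest runs kube-vip as a static pod with ARP-based leader election (vip_arp, vip_leaderelection) and control-plane load-balancing (cp_enable, lb_enable), advertising the VIP 192.168.39.254 on eth0; that address is what control-plane.minikube.internal is pointed at further down. To see which control-plane node currently holds the VIP, something like this should work (a sketch, assuming the eth0 interface name from the config above):

    ip addr show dev eth0 | grep 192.168.39.254   # the address should appear only on the current kube-vip leader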
	I0818 18:55:45.769529   25471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:55:45.779319   25471 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 18:55:45.779409   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 18:55:45.789058   25471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0818 18:55:45.806318   25471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:55:45.823264   25471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0818 18:55:45.840624   25471 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0818 18:55:45.857213   25471 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0818 18:55:45.861395   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:55:45.873798   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:55:45.991237   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:55:46.008028   25471 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125 for IP: 192.168.39.49
	I0818 18:55:46.008055   25471 certs.go:194] generating shared ca certs ...
	I0818 18:55:46.008074   25471 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.008264   25471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 18:55:46.008325   25471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 18:55:46.008335   25471 certs.go:256] generating profile certs ...
	I0818 18:55:46.008421   25471 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key
	I0818 18:55:46.008438   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt with IP's: []
	I0818 18:55:46.215007   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt ...
	I0818 18:55:46.215035   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt: {Name:mk60b149cc8b4a83d937fcffc9f8b33d5653340f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.215197   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key ...
	I0818 18:55:46.215208   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key: {Name:mke859b45cac026e257f0afd9ac7d88fa3a8c8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.215287   25471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.8e559455
	I0818 18:55:46.215302   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.8e559455 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49 192.168.39.254]
	I0818 18:55:46.290985   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.8e559455 ...
	I0818 18:55:46.291013   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.8e559455: {Name:mke54735a227e9f631f593460c369a782702e610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.291156   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.8e559455 ...
	I0818 18:55:46.291175   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.8e559455: {Name:mk0a6642fa814770fc81f492baeea14c00651aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.291245   25471 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.8e559455 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt
	I0818 18:55:46.291323   25471 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.8e559455 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key
	I0818 18:55:46.291397   25471 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key
	I0818 18:55:46.291417   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt with IP's: []
	I0818 18:55:46.434855   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt ...
	I0818 18:55:46.434883   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt: {Name:mkf6e55369f3d420e87f16cc023d112c682ebc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.435029   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key ...
	I0818 18:55:46.435041   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key: {Name:mk26df8f001944899b15a3c943b0263d2ac4c738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.435114   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 18:55:46.435135   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 18:55:46.435148   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 18:55:46.435162   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 18:55:46.435177   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 18:55:46.435190   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 18:55:46.435203   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 18:55:46.435217   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 18:55:46.435264   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 18:55:46.435296   25471 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 18:55:46.435306   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:55:46.435328   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:55:46.435349   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:55:46.435372   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 18:55:46.435443   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:55:46.435473   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:55:46.435485   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 18:55:46.435498   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 18:55:46.436039   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:55:46.461729   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:55:46.484922   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:55:46.507667   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 18:55:46.530575   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 18:55:46.553533   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 18:55:46.576503   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:55:46.599561   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 18:55:46.622811   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:55:46.646083   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 18:55:46.668996   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 18:55:46.691751   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 18:55:46.707948   25471 ssh_runner.go:195] Run: openssl version
	I0818 18:55:46.713703   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:55:46.723993   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:55:46.728627   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:55:46.728687   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:55:46.734384   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 18:55:46.744236   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 18:55:46.754073   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 18:55:46.758539   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 18:55:46.758577   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 18:55:46.763947   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 18:55:46.776756   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 18:55:46.787133   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 18:55:46.798849   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 18:55:46.798893   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 18:55:46.807702   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
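The openssl/ln pairs above install each CA into OpenSSL's hashed-lookup directory: openssl x509 -hash -noout prints the subject-name hash, and the <hash>.0 symlink under /etc/ssl/certs is how TLS clients locate the CA (b5213941.0 for minikubeCA in this run). Reproducing one pair by hand would look roughly like:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"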
	I0818 18:55:46.821923   25471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:55:46.829286   25471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:55:46.829344   25471 kubeadm.go:392] StartCluster: {Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:55:46.829419   25471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 18:55:46.829485   25471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 18:55:46.867204   25471 cri.go:89] found id: ""
	I0818 18:55:46.867284   25471 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 18:55:46.877047   25471 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 18:55:46.886645   25471 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 18:55:46.895945   25471 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 18:55:46.895969   25471 kubeadm.go:157] found existing configuration files:
	
	I0818 18:55:46.896022   25471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 18:55:46.905063   25471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 18:55:46.905127   25471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 18:55:46.914364   25471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 18:55:46.922916   25471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 18:55:46.922973   25471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 18:55:46.932232   25471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 18:55:46.940809   25471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 18:55:46.940871   25471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 18:55:46.949854   25471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 18:55:46.959016   25471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 18:55:46.959065   25471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 18:55:46.968021   25471 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 18:55:47.066601   25471 kubeadm.go:310] W0818 18:55:47.044986     857 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:55:47.067020   25471 kubeadm.go:310] W0818 18:55:47.046006     857 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:55:47.182288   25471 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 18:55:58.150958   25471 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 18:55:58.151022   25471 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 18:55:58.151115   25471 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 18:55:58.151230   25471 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 18:55:58.151364   25471 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 18:55:58.151477   25471 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 18:55:58.152947   25471 out.go:235]   - Generating certificates and keys ...
	I0818 18:55:58.153024   25471 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 18:55:58.153081   25471 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 18:55:58.153137   25471 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0818 18:55:58.153208   25471 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0818 18:55:58.153286   25471 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0818 18:55:58.153337   25471 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0818 18:55:58.153388   25471 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0818 18:55:58.153498   25471 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-189125 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	I0818 18:55:58.153558   25471 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0818 18:55:58.153695   25471 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-189125 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	I0818 18:55:58.153774   25471 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0818 18:55:58.153828   25471 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0818 18:55:58.153873   25471 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0818 18:55:58.153920   25471 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 18:55:58.153965   25471 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 18:55:58.154013   25471 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 18:55:58.154064   25471 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 18:55:58.154118   25471 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 18:55:58.154172   25471 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 18:55:58.154267   25471 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 18:55:58.154330   25471 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 18:55:58.155738   25471 out.go:235]   - Booting up control plane ...
	I0818 18:55:58.155813   25471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 18:55:58.155881   25471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 18:55:58.155940   25471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 18:55:58.156038   25471 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 18:55:58.156124   25471 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 18:55:58.156161   25471 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 18:55:58.156304   25471 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 18:55:58.156415   25471 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 18:55:58.156471   25471 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001537715s
	I0818 18:55:58.156531   25471 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 18:55:58.156595   25471 kubeadm.go:310] [api-check] The API server is healthy after 5.648292247s
	I0818 18:55:58.156762   25471 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 18:55:58.156912   25471 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 18:55:58.156979   25471 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 18:55:58.157229   25471 kubeadm.go:310] [mark-control-plane] Marking the node ha-189125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 18:55:58.157294   25471 kubeadm.go:310] [bootstrap-token] Using token: aoujqn.tyz3etdztt4uivkk
	I0818 18:55:58.158504   25471 out.go:235]   - Configuring RBAC rules ...
	I0818 18:55:58.158635   25471 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 18:55:58.158736   25471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 18:55:58.158903   25471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 18:55:58.159049   25471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 18:55:58.159158   25471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 18:55:58.159242   25471 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 18:55:58.159370   25471 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 18:55:58.159445   25471 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 18:55:58.159514   25471 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 18:55:58.159524   25471 kubeadm.go:310] 
	I0818 18:55:58.159603   25471 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 18:55:58.159612   25471 kubeadm.go:310] 
	I0818 18:55:58.159723   25471 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 18:55:58.159734   25471 kubeadm.go:310] 
	I0818 18:55:58.159768   25471 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 18:55:58.159842   25471 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 18:55:58.159912   25471 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 18:55:58.159922   25471 kubeadm.go:310] 
	I0818 18:55:58.159996   25471 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 18:55:58.160005   25471 kubeadm.go:310] 
	I0818 18:55:58.160062   25471 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 18:55:58.160073   25471 kubeadm.go:310] 
	I0818 18:55:58.160150   25471 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 18:55:58.160270   25471 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 18:55:58.160362   25471 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 18:55:58.160373   25471 kubeadm.go:310] 
	I0818 18:55:58.160484   25471 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 18:55:58.160595   25471 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 18:55:58.160606   25471 kubeadm.go:310] 
	I0818 18:55:58.160725   25471 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token aoujqn.tyz3etdztt4uivkk \
	I0818 18:55:58.160870   25471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 18:55:58.160911   25471 kubeadm.go:310] 	--control-plane 
	I0818 18:55:58.160923   25471 kubeadm.go:310] 
	I0818 18:55:58.161036   25471 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 18:55:58.161044   25471 kubeadm.go:310] 
	I0818 18:55:58.161175   25471 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token aoujqn.tyz3etdztt4uivkk \
	I0818 18:55:58.161323   25471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
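Both join commands printed by kubeadm embed the bootstrap token (ttl 24h per the InitConfiguration above) plus the CA certificate hash. minikube drives node joins itself, but if a token like this has expired, a fresh worker join line can be generated on the control plane with the standard kubeadm command:

    sudo kubeadm token create --print-join-command   # prints a new 'kubeadm join ...' with a valid token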
	I0818 18:55:58.161341   25471 cni.go:84] Creating CNI manager for ""
	I0818 18:55:58.161351   25471 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0818 18:55:58.162711   25471 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0818 18:55:58.163822   25471 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0818 18:55:58.169203   25471 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0818 18:55:58.169218   25471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0818 18:55:58.188127   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0818 18:55:58.622841   25471 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 18:55:58.622934   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:55:58.622961   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-189125 minikube.k8s.io/updated_at=2024_08_18T18_55_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=ha-189125 minikube.k8s.io/primary=true
	I0818 18:55:58.669712   25471 ops.go:34] apiserver oom_adj: -16
	I0818 18:55:58.847689   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:55:59.348324   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:55:59.848273   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:00.348616   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:00.848398   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:01.348467   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:01.848101   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:01.957872   25471 kubeadm.go:1113] duration metric: took 3.335030876s to wait for elevateKubeSystemPrivileges
	I0818 18:56:01.957911   25471 kubeadm.go:394] duration metric: took 15.128570088s to StartCluster
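The repeated get sa default calls above are minikube polling for the default ServiceAccount, which only appears once the controller manager's service-account controller is up (the elevateKubeSystemPrivileges step timed here). After the kubeconfig update on the next lines, the same check can be made from the host, for example:

    kubectl --context ha-189125 -n default get serviceaccount default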
	I0818 18:56:01.957932   25471 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:01.958011   25471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:56:01.959069   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:01.959305   25471 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:56:01.959332   25471 start.go:241] waiting for startup goroutines ...
	I0818 18:56:01.959367   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0818 18:56:01.959349   25471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 18:56:01.959443   25471 addons.go:69] Setting storage-provisioner=true in profile "ha-189125"
	I0818 18:56:01.959478   25471 addons.go:69] Setting default-storageclass=true in profile "ha-189125"
	I0818 18:56:01.959523   25471 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-189125"
	I0818 18:56:01.959550   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:56:01.959481   25471 addons.go:234] Setting addon storage-provisioner=true in "ha-189125"
	I0818 18:56:01.959623   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:56:01.960014   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.960064   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.960149   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.960186   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.974624   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I0818 18:56:01.974795   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41417
	I0818 18:56:01.975220   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:01.975278   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:01.975728   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:01.975741   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:01.975859   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:01.975884   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:01.976041   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:01.976197   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:01.976216   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:56:01.976789   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.976834   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.978185   25471 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:56:01.978537   25471 kapi.go:59] client config for ha-189125: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key", CAFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 18:56:01.979072   25471 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 18:56:01.979366   25471 addons.go:234] Setting addon default-storageclass=true in "ha-189125"
	I0818 18:56:01.979420   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:56:01.979831   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.979874   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.992481   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0818 18:56:01.992968   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:01.993498   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:01.993523   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:01.993702   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42477
	I0818 18:56:01.993872   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:01.994043   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:56:01.994052   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:01.994550   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:01.994572   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:01.994896   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:01.995476   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.995514   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.996022   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:56:01.998223   25471 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 18:56:01.999531   25471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:56:01.999552   25471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 18:56:01.999571   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:56:02.002114   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:02.002476   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:56:02.002511   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:02.002741   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:56:02.002920   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:56:02.003052   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:56:02.003184   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:56:02.010991   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
	I0818 18:56:02.011365   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:02.011800   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:02.011815   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:02.012069   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:02.012271   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:56:02.013525   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:56:02.013735   25471 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 18:56:02.013748   25471 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 18:56:02.013760   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:56:02.016129   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:02.016506   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:56:02.016533   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:02.016668   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:56:02.016814   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:56:02.016927   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:56:02.017043   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:56:02.088071   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0818 18:56:02.197400   25471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:56:02.245760   25471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 18:56:02.619549   25471 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
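
	Note: the sed pipeline above edits the coredns ConfigMap in place so that host.minikube.internal resolves to the host-side gateway 192.168.39.1 inside the cluster. A quick way to confirm the injected block, using only names that appear in this log, would be:

	    kubectl --context ha-189125 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # per the replace command above, the Corefile should now contain:
	    #     hosts {
	    #        192.168.39.1 host.minikube.internal
	    #        fallthrough
	    #     }
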
	I0818 18:56:02.816515   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.816555   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.816600   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.816622   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.816851   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.816869   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.816879   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.816887   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.816929   25471 main.go:141] libmachine: (ha-189125) DBG | Closing plugin on server side
	I0818 18:56:02.817129   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.817149   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.817162   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.817150   25471 main.go:141] libmachine: (ha-189125) DBG | Closing plugin on server side
	I0818 18:56:02.817177   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.817195   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.817228   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.818528   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.818541   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.818614   25471 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 18:56:02.818637   25471 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 18:56:02.818726   25471 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0818 18:56:02.818738   25471 round_trippers.go:469] Request Headers:
	I0818 18:56:02.818748   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:56:02.818754   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:56:02.831248   25471 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0818 18:56:02.831902   25471 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0818 18:56:02.831917   25471 round_trippers.go:469] Request Headers:
	I0818 18:56:02.831924   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:56:02.831928   25471 round_trippers.go:473]     Content-Type: application/json
	I0818 18:56:02.831931   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:56:02.834381   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:56:02.834513   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.834524   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.834753   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.834776   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.834792   25471 main.go:141] libmachine: (ha-189125) DBG | Closing plugin on server side
	I0818 18:56:02.836769   25471 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0818 18:56:02.838039   25471 addons.go:510] duration metric: took 878.697589ms for enable addons: enabled=[storage-provisioner default-storageclass]
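
	Note: both addon manifests were applied with the node's bundled kubectl against /etc/kubernetes/addons, and the PUT to /storageclasses/standard above appears to be the update that marks the provisioned class as the default. A sketch of how this could be spot-checked from the host (profile and binary names taken from this run):

	    out/minikube-linux-amd64 -p ha-189125 addons list
	    kubectl --context ha-189125 get storageclass
	    kubectl --context ha-189125 -n kube-system get pod storage-provisioner
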
	I0818 18:56:02.838067   25471 start.go:246] waiting for cluster config update ...
	I0818 18:56:02.838086   25471 start.go:255] writing updated cluster config ...
	I0818 18:56:02.839659   25471 out.go:201] 
	I0818 18:56:02.841145   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:56:02.841216   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:56:02.842895   25471 out.go:177] * Starting "ha-189125-m02" control-plane node in "ha-189125" cluster
	I0818 18:56:02.844170   25471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:56:02.844192   25471 cache.go:56] Caching tarball of preloaded images
	I0818 18:56:02.844277   25471 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 18:56:02.844287   25471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 18:56:02.844364   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:56:02.844521   25471 start.go:360] acquireMachinesLock for ha-189125-m02: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:56:02.844558   25471 start.go:364] duration metric: took 20.894µs to acquireMachinesLock for "ha-189125-m02"
	I0818 18:56:02.844574   25471 start.go:93] Provisioning new machine with config: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:56:02.844640   25471 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0818 18:56:02.846148   25471 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 18:56:02.846236   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:02.846268   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:02.860808   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I0818 18:56:02.861235   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:02.861702   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:02.861721   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:02.861996   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:02.862198   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetMachineName
	I0818 18:56:02.862368   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:02.862553   25471 start.go:159] libmachine.API.Create for "ha-189125" (driver="kvm2")
	I0818 18:56:02.862577   25471 client.go:168] LocalClient.Create starting
	I0818 18:56:02.862602   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 18:56:02.862633   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:56:02.862647   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:56:02.862692   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 18:56:02.862710   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:56:02.862721   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:56:02.862734   25471 main.go:141] libmachine: Running pre-create checks...
	I0818 18:56:02.862741   25471 main.go:141] libmachine: (ha-189125-m02) Calling .PreCreateCheck
	I0818 18:56:02.862917   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetConfigRaw
	I0818 18:56:02.863279   25471 main.go:141] libmachine: Creating machine...
	I0818 18:56:02.863298   25471 main.go:141] libmachine: (ha-189125-m02) Calling .Create
	I0818 18:56:02.863621   25471 main.go:141] libmachine: (ha-189125-m02) Creating KVM machine...
	I0818 18:56:02.864840   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found existing default KVM network
	I0818 18:56:02.865009   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found existing private KVM network mk-ha-189125
	I0818 18:56:02.865153   25471 main.go:141] libmachine: (ha-189125-m02) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02 ...
	I0818 18:56:02.865175   25471 main.go:141] libmachine: (ha-189125-m02) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:56:02.865197   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:02.865122   25839 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:56:02.865310   25471 main.go:141] libmachine: (ha-189125-m02) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 18:56:03.088149   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:03.087993   25839 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa...
	I0818 18:56:03.305944   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:03.305815   25839 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/ha-189125-m02.rawdisk...
	I0818 18:56:03.305967   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Writing magic tar header
	I0818 18:56:03.305977   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Writing SSH key tar header
	I0818 18:56:03.305985   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:03.305935   25839 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02 ...
	I0818 18:56:03.306074   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02
	I0818 18:56:03.306108   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02 (perms=drwx------)
	I0818 18:56:03.306118   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 18:56:03.306129   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 18:56:03.306148   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 18:56:03.306157   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 18:56:03.306168   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 18:56:03.306178   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 18:56:03.306193   25471 main.go:141] libmachine: (ha-189125-m02) Creating domain...
	I0818 18:56:03.306202   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:56:03.306209   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 18:56:03.306220   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 18:56:03.306238   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins
	I0818 18:56:03.306249   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home
	I0818 18:56:03.306261   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Skipping /home - not owner
	I0818 18:56:03.307282   25471 main.go:141] libmachine: (ha-189125-m02) define libvirt domain using xml: 
	I0818 18:56:03.307307   25471 main.go:141] libmachine: (ha-189125-m02) <domain type='kvm'>
	I0818 18:56:03.307318   25471 main.go:141] libmachine: (ha-189125-m02)   <name>ha-189125-m02</name>
	I0818 18:56:03.307334   25471 main.go:141] libmachine: (ha-189125-m02)   <memory unit='MiB'>2200</memory>
	I0818 18:56:03.307347   25471 main.go:141] libmachine: (ha-189125-m02)   <vcpu>2</vcpu>
	I0818 18:56:03.307357   25471 main.go:141] libmachine: (ha-189125-m02)   <features>
	I0818 18:56:03.307368   25471 main.go:141] libmachine: (ha-189125-m02)     <acpi/>
	I0818 18:56:03.307392   25471 main.go:141] libmachine: (ha-189125-m02)     <apic/>
	I0818 18:56:03.307405   25471 main.go:141] libmachine: (ha-189125-m02)     <pae/>
	I0818 18:56:03.307416   25471 main.go:141] libmachine: (ha-189125-m02)     
	I0818 18:56:03.307425   25471 main.go:141] libmachine: (ha-189125-m02)   </features>
	I0818 18:56:03.307435   25471 main.go:141] libmachine: (ha-189125-m02)   <cpu mode='host-passthrough'>
	I0818 18:56:03.307445   25471 main.go:141] libmachine: (ha-189125-m02)   
	I0818 18:56:03.307456   25471 main.go:141] libmachine: (ha-189125-m02)   </cpu>
	I0818 18:56:03.307468   25471 main.go:141] libmachine: (ha-189125-m02)   <os>
	I0818 18:56:03.307476   25471 main.go:141] libmachine: (ha-189125-m02)     <type>hvm</type>
	I0818 18:56:03.307488   25471 main.go:141] libmachine: (ha-189125-m02)     <boot dev='cdrom'/>
	I0818 18:56:03.307503   25471 main.go:141] libmachine: (ha-189125-m02)     <boot dev='hd'/>
	I0818 18:56:03.307515   25471 main.go:141] libmachine: (ha-189125-m02)     <bootmenu enable='no'/>
	I0818 18:56:03.307539   25471 main.go:141] libmachine: (ha-189125-m02)   </os>
	I0818 18:56:03.307552   25471 main.go:141] libmachine: (ha-189125-m02)   <devices>
	I0818 18:56:03.307564   25471 main.go:141] libmachine: (ha-189125-m02)     <disk type='file' device='cdrom'>
	I0818 18:56:03.307597   25471 main.go:141] libmachine: (ha-189125-m02)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/boot2docker.iso'/>
	I0818 18:56:03.307622   25471 main.go:141] libmachine: (ha-189125-m02)       <target dev='hdc' bus='scsi'/>
	I0818 18:56:03.307635   25471 main.go:141] libmachine: (ha-189125-m02)       <readonly/>
	I0818 18:56:03.307645   25471 main.go:141] libmachine: (ha-189125-m02)     </disk>
	I0818 18:56:03.307657   25471 main.go:141] libmachine: (ha-189125-m02)     <disk type='file' device='disk'>
	I0818 18:56:03.307669   25471 main.go:141] libmachine: (ha-189125-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 18:56:03.307686   25471 main.go:141] libmachine: (ha-189125-m02)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/ha-189125-m02.rawdisk'/>
	I0818 18:56:03.307701   25471 main.go:141] libmachine: (ha-189125-m02)       <target dev='hda' bus='virtio'/>
	I0818 18:56:03.307713   25471 main.go:141] libmachine: (ha-189125-m02)     </disk>
	I0818 18:56:03.307723   25471 main.go:141] libmachine: (ha-189125-m02)     <interface type='network'>
	I0818 18:56:03.307735   25471 main.go:141] libmachine: (ha-189125-m02)       <source network='mk-ha-189125'/>
	I0818 18:56:03.307748   25471 main.go:141] libmachine: (ha-189125-m02)       <model type='virtio'/>
	I0818 18:56:03.307760   25471 main.go:141] libmachine: (ha-189125-m02)     </interface>
	I0818 18:56:03.307775   25471 main.go:141] libmachine: (ha-189125-m02)     <interface type='network'>
	I0818 18:56:03.307788   25471 main.go:141] libmachine: (ha-189125-m02)       <source network='default'/>
	I0818 18:56:03.307799   25471 main.go:141] libmachine: (ha-189125-m02)       <model type='virtio'/>
	I0818 18:56:03.307823   25471 main.go:141] libmachine: (ha-189125-m02)     </interface>
	I0818 18:56:03.307834   25471 main.go:141] libmachine: (ha-189125-m02)     <serial type='pty'>
	I0818 18:56:03.307866   25471 main.go:141] libmachine: (ha-189125-m02)       <target port='0'/>
	I0818 18:56:03.307888   25471 main.go:141] libmachine: (ha-189125-m02)     </serial>
	I0818 18:56:03.307901   25471 main.go:141] libmachine: (ha-189125-m02)     <console type='pty'>
	I0818 18:56:03.307912   25471 main.go:141] libmachine: (ha-189125-m02)       <target type='serial' port='0'/>
	I0818 18:56:03.307924   25471 main.go:141] libmachine: (ha-189125-m02)     </console>
	I0818 18:56:03.307934   25471 main.go:141] libmachine: (ha-189125-m02)     <rng model='virtio'>
	I0818 18:56:03.307945   25471 main.go:141] libmachine: (ha-189125-m02)       <backend model='random'>/dev/random</backend>
	I0818 18:56:03.307959   25471 main.go:141] libmachine: (ha-189125-m02)     </rng>
	I0818 18:56:03.307969   25471 main.go:141] libmachine: (ha-189125-m02)     
	I0818 18:56:03.307979   25471 main.go:141] libmachine: (ha-189125-m02)     
	I0818 18:56:03.307987   25471 main.go:141] libmachine: (ha-189125-m02)   </devices>
	I0818 18:56:03.307996   25471 main.go:141] libmachine: (ha-189125-m02) </domain>
	I0818 18:56:03.308012   25471 main.go:141] libmachine: (ha-189125-m02) 
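
	Note: the XML block above is the complete libvirt definition the kvm2 driver writes for the m02 machine: 2 vCPUs, 2200 MiB of RAM, boot from the boot2docker ISO, a raw virtio system disk, and two virtio NICs (one on the private mk-ha-189125 network, one on the default network). A rough manual equivalent of the define/start/read-back sequence that follows, using the qemu:///system URI from the machine config (a sketch, not the driver's literal calls):

	    virsh -c qemu:///system define ha-189125-m02.xml
	    virsh -c qemu:///system start ha-189125-m02
	    virsh -c qemu:///system dumpxml ha-189125-m02    # what "Getting domain xml..." reads back
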
	I0818 18:56:03.315735   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:bf:d4:3e in network default
	I0818 18:56:03.316418   25471 main.go:141] libmachine: (ha-189125-m02) Ensuring networks are active...
	I0818 18:56:03.316447   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:03.317294   25471 main.go:141] libmachine: (ha-189125-m02) Ensuring network default is active
	I0818 18:56:03.317698   25471 main.go:141] libmachine: (ha-189125-m02) Ensuring network mk-ha-189125 is active
	I0818 18:56:03.318186   25471 main.go:141] libmachine: (ha-189125-m02) Getting domain xml...
	I0818 18:56:03.318992   25471 main.go:141] libmachine: (ha-189125-m02) Creating domain...
	I0818 18:56:04.549654   25471 main.go:141] libmachine: (ha-189125-m02) Waiting to get IP...
	I0818 18:56:04.550438   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:04.550841   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:04.550906   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:04.550836   25839 retry.go:31] will retry after 189.70945ms: waiting for machine to come up
	I0818 18:56:04.742242   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:04.742892   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:04.742917   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:04.742851   25839 retry.go:31] will retry after 306.441708ms: waiting for machine to come up
	I0818 18:56:05.051422   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:05.051867   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:05.051894   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:05.051822   25839 retry.go:31] will retry after 309.375385ms: waiting for machine to come up
	I0818 18:56:05.362202   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:05.362738   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:05.362767   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:05.362696   25839 retry.go:31] will retry after 531.292093ms: waiting for machine to come up
	I0818 18:56:05.895365   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:05.895790   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:05.895817   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:05.895741   25839 retry.go:31] will retry after 476.983941ms: waiting for machine to come up
	I0818 18:56:06.374351   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:06.374784   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:06.374814   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:06.374725   25839 retry.go:31] will retry after 760.550106ms: waiting for machine to come up
	I0818 18:56:07.136601   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:07.137029   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:07.137052   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:07.137001   25839 retry.go:31] will retry after 833.085885ms: waiting for machine to come up
	I0818 18:56:07.972109   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:07.972719   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:07.972743   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:07.972679   25839 retry.go:31] will retry after 1.213935964s: waiting for machine to come up
	I0818 18:56:09.188185   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:09.188647   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:09.188676   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:09.188614   25839 retry.go:31] will retry after 1.477368217s: waiting for machine to come up
	I0818 18:56:10.668113   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:10.668564   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:10.668590   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:10.668514   25839 retry.go:31] will retry after 2.1955723s: waiting for machine to come up
	I0818 18:56:12.865446   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:12.865894   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:12.865922   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:12.865849   25839 retry.go:31] will retry after 1.867147502s: waiting for machine to come up
	I0818 18:56:14.734272   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:14.734703   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:14.734732   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:14.734657   25839 retry.go:31] will retry after 2.346085082s: waiting for machine to come up
	I0818 18:56:17.084059   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:17.084444   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:17.084475   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:17.084418   25839 retry.go:31] will retry after 3.612682767s: waiting for machine to come up
	I0818 18:56:20.700361   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:20.700713   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:20.700734   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:20.700687   25839 retry.go:31] will retry after 3.880590162s: waiting for machine to come up
	I0818 18:56:24.583447   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.584008   25471 main.go:141] libmachine: (ha-189125-m02) Found IP for machine: 192.168.39.147
	I0818 18:56:24.584031   25471 main.go:141] libmachine: (ha-189125-m02) Reserving static IP address...
	I0818 18:56:24.584045   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has current primary IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.584474   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find host DHCP lease matching {name: "ha-189125-m02", mac: "52:54:00:a7:f4:4c", ip: "192.168.39.147"} in network mk-ha-189125
	I0818 18:56:24.655647   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Getting to WaitForSSH function...
	I0818 18:56:24.655687   25471 main.go:141] libmachine: (ha-189125-m02) Reserved static IP address: 192.168.39.147
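
	Note: the "Waiting to get IP" loop above polls the private network's DHCP leases with an increasing backoff (from roughly 190ms up to about 3.9s) until the m02 MAC appears; once 192.168.39.147 is handed out, the driver reserves it as a static lease so the address stays stable. The equivalent manual check, with the network and MAC from this log, would be along the lines of:

	    virsh -c qemu:///system net-dhcp-leases mk-ha-189125 | grep 52:54:00:a7:f4:4c
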
	I0818 18:56:24.655700   25471 main.go:141] libmachine: (ha-189125-m02) Waiting for SSH to be available...
	I0818 18:56:24.658246   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.658606   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:24.658635   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.658782   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Using SSH client type: external
	I0818 18:56:24.658806   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa (-rw-------)
	I0818 18:56:24.658829   25471 main.go:141] libmachine: (ha-189125-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 18:56:24.658839   25471 main.go:141] libmachine: (ha-189125-m02) DBG | About to run SSH command:
	I0818 18:56:24.658850   25471 main.go:141] libmachine: (ha-189125-m02) DBG | exit 0
	I0818 18:56:24.783851   25471 main.go:141] libmachine: (ha-189125-m02) DBG | SSH cmd err, output: <nil>: 
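
	Note: WaitForSSH simply runs exit 0 over SSH, with host-key checking disabled, until the guest answers; the external client invocation it assembles is shown a few lines up. Reproduced by hand it is essentially:

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa \
	        docker@192.168.39.147 'exit 0' && echo 'ssh is up'
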
	I0818 18:56:24.784144   25471 main.go:141] libmachine: (ha-189125-m02) KVM machine creation complete!
	I0818 18:56:24.784456   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetConfigRaw
	I0818 18:56:24.784958   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:24.785135   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:24.785312   25471 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 18:56:24.785327   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 18:56:24.786656   25471 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 18:56:24.786669   25471 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 18:56:24.786675   25471 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 18:56:24.786680   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:24.788953   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.789330   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:24.789370   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.789542   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:24.789726   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:24.789897   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:24.790075   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:24.790250   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:24.790448   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:24.790460   25471 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 18:56:24.894553   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:56:24.894576   25471 main.go:141] libmachine: Detecting the provisioner...
	I0818 18:56:24.894600   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:24.897373   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.897739   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:24.897767   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.897909   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:24.898119   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:24.898243   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:24.898374   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:24.898524   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:24.898690   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:24.898963   25471 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 18:56:25.004189   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 18:56:25.004258   25471 main.go:141] libmachine: found compatible host: buildroot
	I0818 18:56:25.004271   25471 main.go:141] libmachine: Provisioning with buildroot...
	I0818 18:56:25.004284   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetMachineName
	I0818 18:56:25.004538   25471 buildroot.go:166] provisioning hostname "ha-189125-m02"
	I0818 18:56:25.004566   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetMachineName
	I0818 18:56:25.004753   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.007197   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.007543   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.007568   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.007762   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.007935   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.008072   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.008219   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.008374   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:25.008550   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:25.008567   25471 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-189125-m02 && echo "ha-189125-m02" | sudo tee /etc/hostname
	I0818 18:56:25.129102   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125-m02
	
	I0818 18:56:25.129132   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.131946   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.132268   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.132304   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.132456   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.132643   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.132782   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.132898   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.133023   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:25.133174   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:25.133188   25471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-189125-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-189125-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-189125-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:56:25.244663   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
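
	Note: the shell fragment above makes the new hostname resolve locally on the guest: if a line ending in ha-189125-m02 is already present nothing changes, an existing 127.0.1.1 entry is rewritten, and otherwise one is appended, so /etc/hosts ends up containing:

	    127.0.1.1 ha-189125-m02
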
	I0818 18:56:25.244698   25471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 18:56:25.244714   25471 buildroot.go:174] setting up certificates
	I0818 18:56:25.244721   25471 provision.go:84] configureAuth start
	I0818 18:56:25.244729   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetMachineName
	I0818 18:56:25.245016   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 18:56:25.247751   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.248104   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.248134   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.248323   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.250652   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.250985   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.251013   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.251128   25471 provision.go:143] copyHostCerts
	I0818 18:56:25.251158   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:56:25.251197   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 18:56:25.251206   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:56:25.251273   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 18:56:25.251345   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:56:25.251362   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 18:56:25.251368   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:56:25.251415   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 18:56:25.251475   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:56:25.251492   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 18:56:25.251498   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:56:25.251521   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 18:56:25.251570   25471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.ha-189125-m02 san=[127.0.0.1 192.168.39.147 ha-189125-m02 localhost minikube]
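
	Note: configureAuth generates a per-machine server certificate signed by the minikube CA, with the SANs listed above (127.0.0.1, the node IP 192.168.39.147, ha-189125-m02, localhost, minikube). One way to inspect the result afterwards, using the ServerCertPath from the auth options above:

	    openssl x509 -in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem \
	        -noout -text | grep -A1 'Subject Alternative Name'
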
	I0818 18:56:25.348489   25471 provision.go:177] copyRemoteCerts
	I0818 18:56:25.348544   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:56:25.348565   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.351281   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.351657   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.351684   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.351832   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.352062   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.352236   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.352411   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 18:56:25.433192   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 18:56:25.433263   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 18:56:25.457661   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 18:56:25.457729   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:56:25.481448   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 18:56:25.481512   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 18:56:25.506378   25471 provision.go:87] duration metric: took 261.641684ms to configureAuth
	I0818 18:56:25.506402   25471 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:56:25.506577   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:56:25.506654   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.509394   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.509727   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.509748   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.509944   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.510145   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.510350   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.510528   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.510710   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:25.510915   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:25.510932   25471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 18:56:25.780823   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
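
	Note: the provisioner drops CRI-O's extra flags into /etc/sysconfig/crio.minikube and restarts the service; the echoed content above shows the insecure-registry range matching the cluster's service CIDR (10.96.0.0/12). On the guest this can be double-checked with:

	    cat /etc/sysconfig/crio.minikube
	    systemctl is-active crio
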
	
	I0818 18:56:25.780847   25471 main.go:141] libmachine: Checking connection to Docker...
	I0818 18:56:25.780858   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetURL
	I0818 18:56:25.782093   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Using libvirt version 6000000
	I0818 18:56:25.784160   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.784520   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.784545   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.784694   25471 main.go:141] libmachine: Docker is up and running!
	I0818 18:56:25.784708   25471 main.go:141] libmachine: Reticulating splines...
	I0818 18:56:25.784714   25471 client.go:171] duration metric: took 22.922131138s to LocalClient.Create
	I0818 18:56:25.784733   25471 start.go:167] duration metric: took 22.92218128s to libmachine.API.Create "ha-189125"
	I0818 18:56:25.784742   25471 start.go:293] postStartSetup for "ha-189125-m02" (driver="kvm2")
	I0818 18:56:25.784751   25471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:56:25.784774   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:25.785002   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:56:25.785025   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.787001   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.787336   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.787358   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.787513   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.787674   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.787823   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.787921   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 18:56:25.870456   25471 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:56:25.874913   25471 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:56:25.874936   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 18:56:25.874999   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 18:56:25.875070   25471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 18:56:25.875082   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 18:56:25.875195   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 18:56:25.884827   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:56:25.908555   25471 start.go:296] duration metric: took 123.800351ms for postStartSetup
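The postStartSetup step logged above amounts to creating minikube's expected directory layout on the new guest and syncing any staged local assets (here the 149342.pem cert bundle) into it. A minimal shell sketch of the equivalent manual steps, purely illustrative — the IP and key path come from the log, but the host-side paths are shortened to the default ~/.minikube layout rather than the CI run's MINIKUBE_HOME:

    # SSH identity and target taken from the log above
    SSH_KEY=$HOME/.minikube/machines/ha-189125-m02/id_rsa
    NODE=docker@192.168.39.147
    # create the directories minikube expects on the guest
    ssh -i "$SSH_KEY" "$NODE" 'sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests \
      /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images \
      /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs'
    # stage the local cert bundle and move it into /etc/ssl/certs
    scp -i "$SSH_KEY" "$HOME/.minikube/files/etc/ssl/certs/149342.pem" "$NODE:/tmp/149342.pem"
    ssh -i "$SSH_KEY" "$NODE" 'sudo install -m 644 /tmp/149342.pem /etc/ssl/certs/149342.pem'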
	I0818 18:56:25.908610   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetConfigRaw
	I0818 18:56:25.909271   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 18:56:25.911557   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.911891   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.911912   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.912185   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:56:25.912355   25471 start.go:128] duration metric: took 23.067706224s to createHost
	I0818 18:56:25.912374   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.914769   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.915089   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.915110   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.915290   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.915475   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.915634   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.915735   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.915859   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:25.916006   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:25.916015   25471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:56:26.020357   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007385.993920413
	
	I0818 18:56:26.020378   25471 fix.go:216] guest clock: 1724007385.993920413
	I0818 18:56:26.020389   25471 fix.go:229] Guest: 2024-08-18 18:56:25.993920413 +0000 UTC Remote: 2024-08-18 18:56:25.912365204 +0000 UTC m=+69.117362276 (delta=81.555209ms)
	I0818 18:56:26.020415   25471 fix.go:200] guest clock delta is within tolerance: 81.555209ms
	I0818 18:56:26.020423   25471 start.go:83] releasing machines lock for "ha-189125-m02", held for 23.175855754s
	I0818 18:56:26.020453   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:26.020678   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 18:56:26.023373   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.023750   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:26.023771   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.025861   25471 out.go:177] * Found network options:
	I0818 18:56:26.027004   25471 out.go:177]   - NO_PROXY=192.168.39.49
	W0818 18:56:26.028085   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 18:56:26.028108   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:26.028609   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:26.028784   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:26.028868   25471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:56:26.028905   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	W0818 18:56:26.028976   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 18:56:26.029055   25471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 18:56:26.029075   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:26.031162   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.031411   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.031559   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:26.031585   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.031718   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:26.031722   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:26.031744   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.031920   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:26.031922   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:26.032129   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:26.032136   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:26.032271   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:26.032370   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 18:56:26.032570   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 18:56:26.266298   25471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:56:26.272330   25471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:56:26.272391   25471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:56:26.288956   25471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 18:56:26.288976   25471 start.go:495] detecting cgroup driver to use...
	I0818 18:56:26.289039   25471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:56:26.311860   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:56:26.326557   25471 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:56:26.326620   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:56:26.340258   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:56:26.354673   25471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:56:26.473057   25471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:56:26.649348   25471 docker.go:233] disabling docker service ...
	I0818 18:56:26.649425   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:56:26.664482   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:56:26.677312   25471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:56:26.798114   25471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:56:26.922521   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:56:26.937473   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:56:26.956873   25471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 18:56:26.956927   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:26.967554   25471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 18:56:26.967611   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:26.978405   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:26.989175   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:27.000397   25471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:56:27.011882   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:27.022693   25471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:27.040262   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:27.050444   25471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:56:27.059996   25471 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 18:56:27.060055   25471 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 18:56:27.073043   25471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 18:56:27.083033   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:56:27.201750   25471 ssh_runner.go:195] Run: sudo systemctl restart crio
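Taken together, the tee and sed edits above leave the runtime configured roughly as follows before crio is restarted (a sketch reconstructed from the commands in the log, not a verbatim dump of the files on the node):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (only the keys touched above)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

With br_netfilter loaded and net.ipv4.ip_forward set to 1, the restart then picks these settings up.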
	I0818 18:56:27.338450   25471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 18:56:27.338508   25471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 18:56:27.343146   25471 start.go:563] Will wait 60s for crictl version
	I0818 18:56:27.343198   25471 ssh_runner.go:195] Run: which crictl
	I0818 18:56:27.346822   25471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:56:27.386415   25471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 18:56:27.386485   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:56:27.414020   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:56:27.444917   25471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 18:56:27.446412   25471 out.go:177]   - env NO_PROXY=192.168.39.49
	I0818 18:56:27.447903   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 18:56:27.450438   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:27.450780   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:27.450813   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:27.451015   25471 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:56:27.455183   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:56:27.469397   25471 mustload.go:65] Loading cluster: ha-189125
	I0818 18:56:27.469602   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:56:27.469905   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:27.469937   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:27.484830   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40151
	I0818 18:56:27.485314   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:27.485898   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:27.485928   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:27.486280   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:27.486456   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:56:27.488234   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:56:27.488577   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:27.488602   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:27.505149   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I0818 18:56:27.505577   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:27.506048   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:27.506067   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:27.506382   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:27.506573   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:56:27.506738   25471 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125 for IP: 192.168.39.147
	I0818 18:56:27.506749   25471 certs.go:194] generating shared ca certs ...
	I0818 18:56:27.506761   25471 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:27.506890   25471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 18:56:27.506946   25471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 18:56:27.506963   25471 certs.go:256] generating profile certs ...
	I0818 18:56:27.507060   25471 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key
	I0818 18:56:27.507093   25471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ccfd3871
	I0818 18:56:27.507115   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ccfd3871 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49 192.168.39.147 192.168.39.254]
	I0818 18:56:27.776824   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ccfd3871 ...
	I0818 18:56:27.776851   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ccfd3871: {Name:mk693f24e6c521c769dd1a90fa61ded18ba545f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:27.777012   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ccfd3871 ...
	I0818 18:56:27.777025   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ccfd3871: {Name:mk5801ce96a42bd9b95bdbb774232e6a93638a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:27.777103   25471 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ccfd3871 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt
	I0818 18:56:27.777230   25471 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ccfd3871 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key
	I0818 18:56:27.777352   25471 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key
	I0818 18:56:27.777366   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 18:56:27.777378   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 18:56:27.777391   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 18:56:27.777405   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 18:56:27.777417   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 18:56:27.777429   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 18:56:27.777443   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 18:56:27.777455   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 18:56:27.777501   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 18:56:27.777528   25471 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 18:56:27.777538   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:56:27.777559   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:56:27.777579   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:56:27.777599   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 18:56:27.777634   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:56:27.777660   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:56:27.777673   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 18:56:27.777685   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 18:56:27.777715   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:56:27.780880   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:27.781262   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:56:27.781290   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:27.781490   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:56:27.781664   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:56:27.781829   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:56:27.781920   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:56:27.851820   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 18:56:27.857165   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 18:56:27.868927   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 18:56:27.873073   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0818 18:56:27.888998   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 18:56:27.893623   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 18:56:27.906221   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 18:56:27.911138   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 18:56:27.924038   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 18:56:27.928729   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 18:56:27.939534   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 18:56:27.944256   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 18:56:27.955028   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:56:27.981285   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:56:28.005203   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:56:28.029382   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 18:56:28.053677   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0818 18:56:28.077371   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 18:56:28.101987   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:56:28.126475   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 18:56:28.150219   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:56:28.173489   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 18:56:28.197046   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 18:56:28.222079   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 18:56:28.239062   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0818 18:56:28.255936   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 18:56:28.273293   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 18:56:28.289535   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 18:56:28.306186   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 18:56:28.322487   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 18:56:28.340403   25471 ssh_runner.go:195] Run: openssl version
	I0818 18:56:28.346165   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:56:28.357299   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:56:28.362092   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:56:28.362148   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:56:28.368013   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 18:56:28.379308   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 18:56:28.390732   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 18:56:28.395653   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 18:56:28.395706   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 18:56:28.401551   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 18:56:28.412455   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 18:56:28.423271   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 18:56:28.427896   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 18:56:28.427947   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 18:56:28.433474   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 18:56:28.444044   25471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:56:28.448173   25471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:56:28.448242   25471 kubeadm.go:934] updating node {m02 192.168.39.147 8443 v1.31.0 crio true true} ...
	I0818 18:56:28.448354   25471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-189125-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 18:56:28.448387   25471 kube-vip.go:115] generating kube-vip config ...
	I0818 18:56:28.448421   25471 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 18:56:28.465207   25471 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 18:56:28.465274   25471 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0818 18:56:28.465320   25471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:56:28.474939   25471 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0818 18:56:28.474993   25471 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0818 18:56:28.484664   25471 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0818 18:56:28.484693   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0818 18:56:28.484749   25471 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0818 18:56:28.484760   25471 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0818 18:56:28.484773   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0818 18:56:28.489593   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0818 18:56:28.489619   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0818 18:57:07.300938   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0818 18:57:07.301041   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0818 18:57:07.306928   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0818 18:57:07.306960   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0818 18:57:21.679905   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:57:21.694904   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0818 18:57:21.694988   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0818 18:57:21.699099   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0818 18:57:21.699128   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
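All three binary transfers above follow the same pattern: stat the file under /var/lib/minikube/binaries/v1.31.0 on the guest, and only when that fails, copy it from the host-side cache (which download.go fills from dl.k8s.io with a checksum file). A rough shell equivalent for one binary, reusing the illustrative SSH_KEY/NODE variables from the earlier sketch and the default ~/.minikube cache path:

    BIN=kubelet
    SRC=$HOME/.minikube/cache/linux/amd64/v1.31.0/$BIN
    DST=/var/lib/minikube/binaries/v1.31.0/$BIN
    # only transfer if the guest-side stat fails
    if ! ssh -i "$SSH_KEY" "$NODE" "stat -c '%s %y' $DST" >/dev/null 2>&1; then
      scp -i "$SSH_KEY" "$SRC" "$NODE:/tmp/$BIN"
      ssh -i "$SSH_KEY" "$NODE" "sudo install -m 755 /tmp/$BIN $DST"
    fi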
	I0818 18:57:22.023889   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 18:57:22.033513   25471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0818 18:57:22.050257   25471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:57:22.067666   25471 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0818 18:57:22.084525   25471 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0818 18:57:22.088470   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:57:22.102139   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:57:22.228480   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:57:22.244566   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:57:22.244880   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:57:22.244927   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:57:22.260307   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0818 18:57:22.260759   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:57:22.261222   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:57:22.261241   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:57:22.261547   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:57:22.261798   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:57:22.261960   25471 start.go:317] joinCluster: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:57:22.262102   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0818 18:57:22.262126   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:57:22.265153   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:57:22.265644   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:57:22.265672   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:57:22.265880   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:57:22.266035   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:57:22.266211   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:57:22.266348   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:57:22.413322   25471 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:57:22.413420   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lspzfs.e7jiyw0f2vub7bzi --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m02 --control-plane --apiserver-advertise-address=192.168.39.147 --apiserver-bind-port=8443"
	I0818 18:57:43.013800   25471 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lspzfs.e7jiyw0f2vub7bzi --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m02 --control-plane --apiserver-advertise-address=192.168.39.147 --apiserver-bind-port=8443": (20.600347191s)
	I0818 18:57:43.013834   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0818 18:57:43.519884   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-189125-m02 minikube.k8s.io/updated_at=2024_08_18T18_57_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=ha-189125 minikube.k8s.io/primary=false
	I0818 18:57:43.625984   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-189125-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0818 18:57:43.732295   25471 start.go:319] duration metric: took 21.47033009s to joinCluster
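The join above is a two-step flow: the primary mints a reusable join command with `kubeadm token create --print-join-command --ttl=0`, and the new machine runs it with the control-plane flags shown in the log; minikube can skip kubeadm's --upload-certs/--certificate-key dance here because it already scp'd the shared cluster PKI into /var/lib/minikube/certs earlier in this log. A condensed sketch of the same flow (the flags are copied from the log, the wrapper shell is illustrative):

    # on the existing control-plane node
    JOIN_CMD=$(sudo kubeadm token create --print-join-command --ttl=0)
    # on ha-189125-m02, with the cluster certs already in place
    FLAGS="--control-plane --apiserver-advertise-address=192.168.39.147 --apiserver-bind-port=8443"
    FLAGS="$FLAGS --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m02 --ignore-preflight-errors=all"
    sudo sh -c "$JOIN_CMD $FLAGS"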
	I0818 18:57:43.732370   25471 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:57:43.732689   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:57:43.733914   25471 out.go:177] * Verifying Kubernetes components...
	I0818 18:57:43.735137   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:57:43.968889   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:57:44.022903   25471 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:57:44.023165   25471 kapi.go:59] client config for ha-189125: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key", CAFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 18:57:44.023229   25471 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.49:8443
	I0818 18:57:44.023499   25471 node_ready.go:35] waiting up to 6m0s for node "ha-189125-m02" to be "Ready" ...
	I0818 18:57:44.023594   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:44.023605   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:44.023615   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:44.023620   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:44.034010   25471 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
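The repeated GETs that follow are minikube polling the Node object (roughly every half second, against the surviving apiserver endpoint after the stale-VIP override noted above) until its Ready condition flips to True or the 6m0s budget runs out. The kubectl equivalent of that wait would be roughly the following, assuming the profile name doubles as the kubeconfig context, as minikube normally sets it up:

    kubectl --context ha-189125 wait --for=condition=Ready node/ha-189125-m02 --timeout=6m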
	I0818 18:57:44.523644   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:44.523679   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:44.523687   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:44.523693   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:44.527673   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:45.024085   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:45.024106   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:45.024117   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:45.024122   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:45.028184   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:45.524187   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:45.524216   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:45.524227   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:45.524232   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:45.530294   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:57:46.024367   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:46.024391   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:46.024409   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:46.024414   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:46.028288   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:46.028857   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:46.524311   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:46.524333   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:46.524341   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:46.524345   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:46.527565   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:47.024585   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:47.024605   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:47.024613   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:47.024620   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:47.028783   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:47.524314   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:47.524338   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:47.524419   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:47.524435   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:47.527616   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:48.024174   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:48.024194   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:48.024205   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:48.024210   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:48.029579   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:57:48.030278   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:48.524614   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:48.524636   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:48.524645   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:48.524651   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:48.527520   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:57:49.024621   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:49.024645   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:49.024654   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:49.024662   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:49.028758   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:49.524632   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:49.524654   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:49.524665   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:49.524670   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:49.528158   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:50.024633   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:50.024652   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:50.024660   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:50.024665   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:50.028828   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:50.523807   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:50.523827   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:50.523834   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:50.523837   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:50.527173   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:50.528329   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:51.023735   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:51.023760   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:51.023768   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:51.023774   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:51.028945   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:57:51.523733   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:51.523755   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:51.523765   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:51.523771   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:51.529671   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:57:52.024174   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:52.024197   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:52.024207   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:52.024211   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:52.027681   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:52.524093   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:52.524137   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:52.524145   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:52.524149   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:52.526979   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:57:53.024443   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:53.024464   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:53.024472   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:53.024476   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:53.028194   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:53.028983   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:53.524434   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:53.524456   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:53.524465   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:53.524469   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:53.528476   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:54.023723   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:54.023741   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:54.023748   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:54.023752   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:54.027335   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:54.524350   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:54.524376   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:54.524385   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:54.524388   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:54.528240   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:55.024322   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:55.024343   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:55.024351   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:55.024355   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:55.028532   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:55.524451   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:55.524471   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:55.524479   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:55.524483   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:55.528004   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:55.528676   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:56.024036   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:56.024059   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:56.024067   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:56.024071   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:56.026855   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:57:56.524615   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:56.524635   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:56.524643   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:56.524647   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:56.528071   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:57.024053   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:57.024073   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:57.024082   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:57.024088   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:57.027107   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:57.524328   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:57.524346   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:57.524354   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:57.524360   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:57.527464   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:58.023936   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:58.023964   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:58.023974   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:58.023981   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:58.026995   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:57:58.027742   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:58.524035   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:58.524057   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:58.524065   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:58.524068   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:58.527280   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:59.024391   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:59.024412   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:59.024420   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:59.024424   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:59.027594   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:59.524618   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:59.524639   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:59.524651   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:59.524656   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:59.527690   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:00.024689   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:00.024712   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:00.024720   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:00.024724   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:00.027716   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:00.028252   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:58:00.524681   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:00.524704   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:00.524712   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:00.524716   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:00.527895   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:01.023776   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:01.023800   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:01.023807   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:01.023811   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:01.027204   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:01.524199   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:01.524220   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:01.524228   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:01.524232   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:01.527841   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.023989   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:02.024012   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.024020   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.024024   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.027223   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.524496   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:02.524521   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.524532   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.524537   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.527626   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.528030   25471 node_ready.go:49] node "ha-189125-m02" has status "Ready":"True"
	I0818 18:58:02.528046   25471 node_ready.go:38] duration metric: took 18.504530405s for node "ha-189125-m02" to be "Ready" ...
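
(The GET loop above is the node_ready wait: minikube re-requests /api/v1/nodes/ha-189125-m02 roughly every 500ms until the node's Ready condition turns True. Below is a minimal sketch of that kind of poll using client-go; the function and kubeconfig path are illustrative assumptions, not minikube's actual implementation.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady re-reads the node object until its Ready condition is True
    // or the timeout expires, mirroring the repeated GET /api/v1/nodes/<name>
    // calls in the log above.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // the log shows ~500ms between polls
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(context.Background(), cs, "ha-189125-m02", 6*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("node Ready")
    }
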
	I0818 18:58:02.528054   25471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:58:02.528113   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:02.528122   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.528128   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.528132   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.532615   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:02.538126   25471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.538210   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-7xr26
	I0818 18:58:02.538219   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.538227   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.538230   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.542065   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.542785   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.542802   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.542813   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.542820   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.545807   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.546294   25471 pod_ready.go:93] pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.546315   25471 pod_ready.go:82] duration metric: took 8.164461ms for pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.546327   25471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.546395   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q9j97
	I0818 18:58:02.546406   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.546415   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.546434   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.548550   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.549332   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.549348   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.549354   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.549358   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.552328   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.552829   25471 pod_ready.go:93] pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.552845   25471 pod_ready.go:82] duration metric: took 6.508478ms for pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.552853   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.552899   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125
	I0818 18:58:02.552906   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.552912   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.552919   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.555280   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.556026   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.556043   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.556053   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.556059   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.558355   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.558947   25471 pod_ready.go:93] pod "etcd-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.558964   25471 pod_ready.go:82] duration metric: took 6.101918ms for pod "etcd-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.558975   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.559032   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125-m02
	I0818 18:58:02.559041   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.559052   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.559060   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.561242   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.561942   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:02.561959   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.561968   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.561974   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.564135   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.564594   25471 pod_ready.go:93] pod "etcd-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.564610   25471 pod_ready.go:82] duration metric: took 5.626815ms for pod "etcd-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.564627   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.724985   25471 request.go:632] Waited for 160.28756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125
	I0818 18:58:02.725053   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125
	I0818 18:58:02.725059   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.725067   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.725070   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.728106   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.925412   25471 request.go:632] Waited for 196.61739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.925493   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.925500   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.925510   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.925515   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.928304   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.928779   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.928797   25471 pod_ready.go:82] duration metric: took 364.161268ms for pod "kube-apiserver-ha-189125" in "kube-system" namespace to be "Ready" ...
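
(The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's default token-bucket rate limiter, which caps the client at 5 requests/second with a burst of 10; when the pod_ready checks fan out, requests queue briefly on the client side. A hedged sketch of where those limits live, assuming a plain kubeconfig-based client:)

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        // client-go defaults to QPS=5 and Burst=10; raising them reduces the
        // client-side waits reported in the log (the API server's own
        // priority-and-fairness limits are a separate mechanism).
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
        log.Println("client built with relaxed client-side rate limits")
    }
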
	I0818 18:58:02.928805   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.124917   25471 request.go:632] Waited for 196.044329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m02
	I0818 18:58:03.124971   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m02
	I0818 18:58:03.124977   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.124987   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.124993   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.128374   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:03.325492   25471 request.go:632] Waited for 196.391258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:03.325554   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:03.325559   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.325565   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.325569   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.329364   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:03.329939   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:03.329956   25471 pod_ready.go:82] duration metric: took 401.144525ms for pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.329964   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.525046   25471 request.go:632] Waited for 195.017553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125
	I0818 18:58:03.525118   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125
	I0818 18:58:03.525123   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.525131   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.525138   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.528377   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:03.725379   25471 request.go:632] Waited for 196.368187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:03.725441   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:03.725446   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.725454   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.725462   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.733361   25471 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0818 18:58:03.734061   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:03.734080   25471 pod_ready.go:82] duration metric: took 404.110264ms for pod "kube-controller-manager-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.734090   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.925125   25471 request.go:632] Waited for 190.960818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m02
	I0818 18:58:03.925202   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m02
	I0818 18:58:03.925208   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.925218   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.925236   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.929714   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:04.124822   25471 request.go:632] Waited for 194.214505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:04.124871   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:04.124876   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.124883   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.124887   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.128296   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:04.129112   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:04.129130   25471 pod_ready.go:82] duration metric: took 395.033443ms for pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.129139   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-96xwx" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.325322   25471 request.go:632] Waited for 196.121065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-96xwx
	I0818 18:58:04.325386   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-96xwx
	I0818 18:58:04.325394   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.325403   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.325408   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.328746   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:04.525063   25471 request.go:632] Waited for 195.35461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:04.525140   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:04.525150   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.525158   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.525162   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.531029   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:58:04.531510   25471 pod_ready.go:93] pod "kube-proxy-96xwx" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:04.531527   25471 pod_ready.go:82] duration metric: took 402.383581ms for pod "kube-proxy-96xwx" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.531538   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-scwlr" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.725579   25471 request.go:632] Waited for 193.960312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scwlr
	I0818 18:58:04.725647   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scwlr
	I0818 18:58:04.725655   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.725665   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.725675   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.729209   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:04.925234   25471 request.go:632] Waited for 195.408461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:04.925304   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:04.925312   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.925322   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.925328   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.928729   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:04.929254   25471 pod_ready.go:93] pod "kube-proxy-scwlr" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:04.929273   25471 pod_ready.go:82] duration metric: took 397.729124ms for pod "kube-proxy-scwlr" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.929282   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:05.125332   25471 request.go:632] Waited for 195.992024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125
	I0818 18:58:05.125402   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125
	I0818 18:58:05.125409   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.125416   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.125429   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.130487   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:58:05.325393   25471 request.go:632] Waited for 194.358945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:05.325468   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:05.325474   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.325486   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.325492   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.328765   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:05.329221   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:05.329240   25471 pod_ready.go:82] duration metric: took 399.951715ms for pod "kube-scheduler-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:05.329250   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:05.525435   25471 request.go:632] Waited for 196.100576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m02
	I0818 18:58:05.525519   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m02
	I0818 18:58:05.525529   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.525540   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.525551   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.528437   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:05.725307   25471 request.go:632] Waited for 196.364215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:05.725376   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:05.725381   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.725388   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.725392   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.728475   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:05.728977   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:05.728993   25471 pod_ready.go:82] duration metric: took 399.737599ms for pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:05.729002   25471 pod_ready.go:39] duration metric: took 3.200938183s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:58:05.729017   25471 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:58:05.729063   25471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:58:05.744662   25471 api_server.go:72] duration metric: took 22.01225621s to wait for apiserver process to appear ...
	I0818 18:58:05.744688   25471 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:58:05.744710   25471 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0818 18:58:05.749099   25471 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I0818 18:58:05.749170   25471 round_trippers.go:463] GET https://192.168.39.49:8443/version
	I0818 18:58:05.749182   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.749193   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.749197   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.750281   25471 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 18:58:05.750388   25471 api_server.go:141] control plane version: v1.31.0
	I0818 18:58:05.750405   25471 api_server.go:131] duration metric: took 5.710399ms to wait for apiserver health ...
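
(The healthz step above is a plain HTTPS GET against the apiserver endpoint, expecting a 200 response with body "ok". A minimal sketch of such a probe follows; the InsecureSkipVerify transport is a simplifying assumption, whereas the real check trusts the cluster CA.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.49:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }
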
	I0818 18:58:05.750416   25471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:58:05.924839   25471 request.go:632] Waited for 174.352065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:05.924890   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:05.924896   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.924903   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.924907   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.929868   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:05.933959   25471 system_pods.go:59] 17 kube-system pods found
	I0818 18:58:05.933986   25471 system_pods.go:61] "coredns-6f6b679f8f-7xr26" [d4354313-0e2d-4d96-9cd1-a8f69a4aee26] Running
	I0818 18:58:05.933992   25471 system_pods.go:61] "coredns-6f6b679f8f-q9j97" [1f1c0597-6624-4a3e-8356-7d23555c2809] Running
	I0818 18:58:05.933996   25471 system_pods.go:61] "etcd-ha-189125" [441d8b87-bb19-479f-86a3-eda66e820a81] Running
	I0818 18:58:05.934000   25471 system_pods.go:61] "etcd-ha-189125-m02" [b656f93e-ece8-41c0-b109-584cf52e7b64] Running
	I0818 18:58:05.934003   25471 system_pods.go:61] "kindnet-jwxjh" [086477c9-e6eb-403e-adc7-b15347918484] Running
	I0818 18:58:05.934006   25471 system_pods.go:61] "kindnet-qhnpv" [b23c4910-6e34-46ec-98f2-60ec7ebdd064] Running
	I0818 18:58:05.934010   25471 system_pods.go:61] "kube-apiserver-ha-189125" [707fe85b-0545-4306-aa6f-22580ddb6203] Running
	I0818 18:58:05.934013   25471 system_pods.go:61] "kube-apiserver-ha-189125-m02" [91926546-4ebb-4e81-a0eb-ffaff8d05fdc] Running
	I0818 18:58:05.934018   25471 system_pods.go:61] "kube-controller-manager-ha-189125" [97597204-06d9-4bd5-946d-3f429d2f0d35] Running
	I0818 18:58:05.934022   25471 system_pods.go:61] "kube-controller-manager-ha-189125-m02" [1a866408-5605-49f1-b183-a0c438685633] Running
	I0818 18:58:05.934025   25471 system_pods.go:61] "kube-proxy-96xwx" [c3f6dfae-e097-4889-933b-433f1b6b78fe] Running
	I0818 18:58:05.934028   25471 system_pods.go:61] "kube-proxy-scwlr" [03131eab-be49-4cb1-a0a6-1349f0f8eef7] Running
	I0818 18:58:05.934031   25471 system_pods.go:61] "kube-scheduler-ha-189125" [48202e0e-cebc-47fd-b18a-1dc6372caf8a] Running
	I0818 18:58:05.934035   25471 system_pods.go:61] "kube-scheduler-ha-189125-m02" [cc583916-30b6-46a6-ab8a-651f68065443] Running
	I0818 18:58:05.934038   25471 system_pods.go:61] "kube-vip-ha-189125" [0546880a-99fa-4d9a-a754-586b3b7921ee] Running
	I0818 18:58:05.934041   25471 system_pods.go:61] "kube-vip-ha-189125-m02" [ad04a007-45f2-4a01-97e3-202fa39a028a] Running
	I0818 18:58:05.934044   25471 system_pods.go:61] "storage-provisioner" [35b948dd-9b74-4f76-9cdb-82e0901fc421] Running
	I0818 18:58:05.934049   25471 system_pods.go:74] duration metric: took 183.626614ms to wait for pod list to return data ...
	I0818 18:58:05.934059   25471 default_sa.go:34] waiting for default service account to be created ...
	I0818 18:58:06.125476   25471 request.go:632] Waited for 191.346767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/default/serviceaccounts
	I0818 18:58:06.125538   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/default/serviceaccounts
	I0818 18:58:06.125544   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:06.125554   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:06.125559   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:06.129209   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:06.129479   25471 default_sa.go:45] found service account: "default"
	I0818 18:58:06.129501   25471 default_sa.go:55] duration metric: took 195.435484ms for default service account to be created ...
	I0818 18:58:06.129512   25471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 18:58:06.324965   25471 request.go:632] Waited for 195.377711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:06.325036   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:06.325041   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:06.325048   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:06.325052   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:06.329381   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:06.334419   25471 system_pods.go:86] 17 kube-system pods found
	I0818 18:58:06.334446   25471 system_pods.go:89] "coredns-6f6b679f8f-7xr26" [d4354313-0e2d-4d96-9cd1-a8f69a4aee26] Running
	I0818 18:58:06.334451   25471 system_pods.go:89] "coredns-6f6b679f8f-q9j97" [1f1c0597-6624-4a3e-8356-7d23555c2809] Running
	I0818 18:58:06.334457   25471 system_pods.go:89] "etcd-ha-189125" [441d8b87-bb19-479f-86a3-eda66e820a81] Running
	I0818 18:58:06.334460   25471 system_pods.go:89] "etcd-ha-189125-m02" [b656f93e-ece8-41c0-b109-584cf52e7b64] Running
	I0818 18:58:06.334464   25471 system_pods.go:89] "kindnet-jwxjh" [086477c9-e6eb-403e-adc7-b15347918484] Running
	I0818 18:58:06.334467   25471 system_pods.go:89] "kindnet-qhnpv" [b23c4910-6e34-46ec-98f2-60ec7ebdd064] Running
	I0818 18:58:06.334471   25471 system_pods.go:89] "kube-apiserver-ha-189125" [707fe85b-0545-4306-aa6f-22580ddb6203] Running
	I0818 18:58:06.334474   25471 system_pods.go:89] "kube-apiserver-ha-189125-m02" [91926546-4ebb-4e81-a0eb-ffaff8d05fdc] Running
	I0818 18:58:06.334478   25471 system_pods.go:89] "kube-controller-manager-ha-189125" [97597204-06d9-4bd5-946d-3f429d2f0d35] Running
	I0818 18:58:06.334482   25471 system_pods.go:89] "kube-controller-manager-ha-189125-m02" [1a866408-5605-49f1-b183-a0c438685633] Running
	I0818 18:58:06.334487   25471 system_pods.go:89] "kube-proxy-96xwx" [c3f6dfae-e097-4889-933b-433f1b6b78fe] Running
	I0818 18:58:06.334492   25471 system_pods.go:89] "kube-proxy-scwlr" [03131eab-be49-4cb1-a0a6-1349f0f8eef7] Running
	I0818 18:58:06.334496   25471 system_pods.go:89] "kube-scheduler-ha-189125" [48202e0e-cebc-47fd-b18a-1dc6372caf8a] Running
	I0818 18:58:06.334499   25471 system_pods.go:89] "kube-scheduler-ha-189125-m02" [cc583916-30b6-46a6-ab8a-651f68065443] Running
	I0818 18:58:06.334502   25471 system_pods.go:89] "kube-vip-ha-189125" [0546880a-99fa-4d9a-a754-586b3b7921ee] Running
	I0818 18:58:06.334505   25471 system_pods.go:89] "kube-vip-ha-189125-m02" [ad04a007-45f2-4a01-97e3-202fa39a028a] Running
	I0818 18:58:06.334508   25471 system_pods.go:89] "storage-provisioner" [35b948dd-9b74-4f76-9cdb-82e0901fc421] Running
	I0818 18:58:06.334513   25471 system_pods.go:126] duration metric: took 204.991892ms to wait for k8s-apps to be running ...
	I0818 18:58:06.334520   25471 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 18:58:06.334561   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:58:06.349147   25471 system_svc.go:56] duration metric: took 14.617419ms WaitForService to wait for kubelet
	I0818 18:58:06.349186   25471 kubeadm.go:582] duration metric: took 22.61678389s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:58:06.349210   25471 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:58:06.524534   25471 request.go:632] Waited for 175.252959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes
	I0818 18:58:06.524591   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes
	I0818 18:58:06.524610   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:06.524618   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:06.524622   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:06.528253   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:06.529126   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:58:06.529151   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:58:06.529164   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:58:06.529169   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:58:06.529175   25471 node_conditions.go:105] duration metric: took 179.959806ms to run NodePressure ...
	I0818 18:58:06.529195   25471 start.go:241] waiting for startup goroutines ...
	I0818 18:58:06.529225   25471 start.go:255] writing updated cluster config ...
	I0818 18:58:06.531778   25471 out.go:201] 
	I0818 18:58:06.533765   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:58:06.533895   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:58:06.535954   25471 out.go:177] * Starting "ha-189125-m03" control-plane node in "ha-189125" cluster
	I0818 18:58:06.537589   25471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:58:06.537616   25471 cache.go:56] Caching tarball of preloaded images
	I0818 18:58:06.537730   25471 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 18:58:06.537745   25471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 18:58:06.537887   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:58:06.538151   25471 start.go:360] acquireMachinesLock for ha-189125-m03: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:58:06.538215   25471 start.go:364] duration metric: took 39.455µs to acquireMachinesLock for "ha-189125-m03"
	I0818 18:58:06.538240   25471 start.go:93] Provisioning new machine with config: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:58:06.538374   25471 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0818 18:58:06.540116   25471 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 18:58:06.540221   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:58:06.540264   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:58:06.556326   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
	I0818 18:58:06.556846   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:58:06.557368   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:58:06.557404   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:58:06.557678   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:58:06.557843   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetMachineName
	I0818 18:58:06.557989   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:06.558146   25471 start.go:159] libmachine.API.Create for "ha-189125" (driver="kvm2")
	I0818 18:58:06.558176   25471 client.go:168] LocalClient.Create starting
	I0818 18:58:06.558212   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 18:58:06.558253   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:58:06.558273   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:58:06.558334   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 18:58:06.558359   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:58:06.558384   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:58:06.558409   25471 main.go:141] libmachine: Running pre-create checks...
	I0818 18:58:06.558420   25471 main.go:141] libmachine: (ha-189125-m03) Calling .PreCreateCheck
	I0818 18:58:06.558593   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetConfigRaw
	I0818 18:58:06.558983   25471 main.go:141] libmachine: Creating machine...
	I0818 18:58:06.558999   25471 main.go:141] libmachine: (ha-189125-m03) Calling .Create
	I0818 18:58:06.559098   25471 main.go:141] libmachine: (ha-189125-m03) Creating KVM machine...
	I0818 18:58:06.560323   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found existing default KVM network
	I0818 18:58:06.560408   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found existing private KVM network mk-ha-189125
	I0818 18:58:06.560602   25471 main.go:141] libmachine: (ha-189125-m03) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03 ...
	I0818 18:58:06.560626   25471 main.go:141] libmachine: (ha-189125-m03) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:58:06.560723   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:06.560598   26431 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:58:06.560772   25471 main.go:141] libmachine: (ha-189125-m03) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 18:58:06.794237   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:06.794110   26431 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa...
	I0818 18:58:06.891457   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:06.891293   26431 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/ha-189125-m03.rawdisk...
	I0818 18:58:06.891488   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Writing magic tar header
	I0818 18:58:06.891514   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Writing SSH key tar header
	I0818 18:58:06.891530   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:06.891449   26431 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03 ...
	I0818 18:58:06.891547   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03
	I0818 18:58:06.891614   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03 (perms=drwx------)
	I0818 18:58:06.891642   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 18:58:06.891657   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 18:58:06.891672   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 18:58:06.891684   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:58:06.891700   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 18:58:06.891714   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 18:58:06.891728   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 18:58:06.891746   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins
	I0818 18:58:06.891760   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 18:58:06.891775   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 18:58:06.891784   25471 main.go:141] libmachine: (ha-189125-m03) Creating domain...
	I0818 18:58:06.891796   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home
	I0818 18:58:06.891813   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Skipping /home - not owner
	I0818 18:58:06.892722   25471 main.go:141] libmachine: (ha-189125-m03) define libvirt domain using xml: 
	I0818 18:58:06.892737   25471 main.go:141] libmachine: (ha-189125-m03) <domain type='kvm'>
	I0818 18:58:06.892746   25471 main.go:141] libmachine: (ha-189125-m03)   <name>ha-189125-m03</name>
	I0818 18:58:06.892753   25471 main.go:141] libmachine: (ha-189125-m03)   <memory unit='MiB'>2200</memory>
	I0818 18:58:06.892761   25471 main.go:141] libmachine: (ha-189125-m03)   <vcpu>2</vcpu>
	I0818 18:58:06.892766   25471 main.go:141] libmachine: (ha-189125-m03)   <features>
	I0818 18:58:06.892775   25471 main.go:141] libmachine: (ha-189125-m03)     <acpi/>
	I0818 18:58:06.892782   25471 main.go:141] libmachine: (ha-189125-m03)     <apic/>
	I0818 18:58:06.892792   25471 main.go:141] libmachine: (ha-189125-m03)     <pae/>
	I0818 18:58:06.892802   25471 main.go:141] libmachine: (ha-189125-m03)     
	I0818 18:58:06.892812   25471 main.go:141] libmachine: (ha-189125-m03)   </features>
	I0818 18:58:06.892824   25471 main.go:141] libmachine: (ha-189125-m03)   <cpu mode='host-passthrough'>
	I0818 18:58:06.892835   25471 main.go:141] libmachine: (ha-189125-m03)   
	I0818 18:58:06.892846   25471 main.go:141] libmachine: (ha-189125-m03)   </cpu>
	I0818 18:58:06.892858   25471 main.go:141] libmachine: (ha-189125-m03)   <os>
	I0818 18:58:06.892869   25471 main.go:141] libmachine: (ha-189125-m03)     <type>hvm</type>
	I0818 18:58:06.892880   25471 main.go:141] libmachine: (ha-189125-m03)     <boot dev='cdrom'/>
	I0818 18:58:06.892890   25471 main.go:141] libmachine: (ha-189125-m03)     <boot dev='hd'/>
	I0818 18:58:06.892899   25471 main.go:141] libmachine: (ha-189125-m03)     <bootmenu enable='no'/>
	I0818 18:58:06.892913   25471 main.go:141] libmachine: (ha-189125-m03)   </os>
	I0818 18:58:06.892926   25471 main.go:141] libmachine: (ha-189125-m03)   <devices>
	I0818 18:58:06.892937   25471 main.go:141] libmachine: (ha-189125-m03)     <disk type='file' device='cdrom'>
	I0818 18:58:06.892956   25471 main.go:141] libmachine: (ha-189125-m03)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/boot2docker.iso'/>
	I0818 18:58:06.892967   25471 main.go:141] libmachine: (ha-189125-m03)       <target dev='hdc' bus='scsi'/>
	I0818 18:58:06.892979   25471 main.go:141] libmachine: (ha-189125-m03)       <readonly/>
	I0818 18:58:06.892991   25471 main.go:141] libmachine: (ha-189125-m03)     </disk>
	I0818 18:58:06.893001   25471 main.go:141] libmachine: (ha-189125-m03)     <disk type='file' device='disk'>
	I0818 18:58:06.893010   25471 main.go:141] libmachine: (ha-189125-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 18:58:06.893019   25471 main.go:141] libmachine: (ha-189125-m03)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/ha-189125-m03.rawdisk'/>
	I0818 18:58:06.893027   25471 main.go:141] libmachine: (ha-189125-m03)       <target dev='hda' bus='virtio'/>
	I0818 18:58:06.893032   25471 main.go:141] libmachine: (ha-189125-m03)     </disk>
	I0818 18:58:06.893041   25471 main.go:141] libmachine: (ha-189125-m03)     <interface type='network'>
	I0818 18:58:06.893046   25471 main.go:141] libmachine: (ha-189125-m03)       <source network='mk-ha-189125'/>
	I0818 18:58:06.893051   25471 main.go:141] libmachine: (ha-189125-m03)       <model type='virtio'/>
	I0818 18:58:06.893059   25471 main.go:141] libmachine: (ha-189125-m03)     </interface>
	I0818 18:58:06.893065   25471 main.go:141] libmachine: (ha-189125-m03)     <interface type='network'>
	I0818 18:58:06.893077   25471 main.go:141] libmachine: (ha-189125-m03)       <source network='default'/>
	I0818 18:58:06.893088   25471 main.go:141] libmachine: (ha-189125-m03)       <model type='virtio'/>
	I0818 18:58:06.893100   25471 main.go:141] libmachine: (ha-189125-m03)     </interface>
	I0818 18:58:06.893110   25471 main.go:141] libmachine: (ha-189125-m03)     <serial type='pty'>
	I0818 18:58:06.893118   25471 main.go:141] libmachine: (ha-189125-m03)       <target port='0'/>
	I0818 18:58:06.893123   25471 main.go:141] libmachine: (ha-189125-m03)     </serial>
	I0818 18:58:06.893130   25471 main.go:141] libmachine: (ha-189125-m03)     <console type='pty'>
	I0818 18:58:06.893138   25471 main.go:141] libmachine: (ha-189125-m03)       <target type='serial' port='0'/>
	I0818 18:58:06.893143   25471 main.go:141] libmachine: (ha-189125-m03)     </console>
	I0818 18:58:06.893166   25471 main.go:141] libmachine: (ha-189125-m03)     <rng model='virtio'>
	I0818 18:58:06.893180   25471 main.go:141] libmachine: (ha-189125-m03)       <backend model='random'>/dev/random</backend>
	I0818 18:58:06.893190   25471 main.go:141] libmachine: (ha-189125-m03)     </rng>
	I0818 18:58:06.893200   25471 main.go:141] libmachine: (ha-189125-m03)     
	I0818 18:58:06.893207   25471 main.go:141] libmachine: (ha-189125-m03)     
	I0818 18:58:06.893217   25471 main.go:141] libmachine: (ha-189125-m03)   </devices>
	I0818 18:58:06.893225   25471 main.go:141] libmachine: (ha-189125-m03) </domain>
	I0818 18:58:06.893231   25471 main.go:141] libmachine: (ha-189125-m03) 
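The XML logged above is the libvirt domain definition the kvm2 driver hands to libvirt when creating the new node. As a rough, hypothetical sketch (not minikube's actual code), the same definition can be rendered from a small struct with Go's text/template; DomainSpec and its field names are illustrative:

    package main

    import (
        "os"
        "text/template"
    )

    // DomainSpec is a hypothetical container for the per-node values
    // that vary in the domain XML shown in the log.
    type DomainSpec struct {
        Name     string
        MemoryMB int
        VCPUs    int
        ISOPath  string
        DiskPath string
        Network  string
    }

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='cdrom'><source file='{{.ISOPath}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
        <disk type='file' device='disk'><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
        <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
      </devices>
    </domain>`

    func main() {
        spec := DomainSpec{
            Name:     "ha-189125-m03",
            MemoryMB: 2200,
            VCPUs:    2,
            ISOPath:  "/path/to/boot2docker.iso",
            DiskPath: "/path/to/ha-189125-m03.rawdisk",
            Network:  "mk-ha-189125",
        }
        // Render the domain XML; a real driver would pass this string to libvirt
        // to define the domain, as the log does just above.
        tmpl := template.Must(template.New("domain").Parse(domainTmpl))
        _ = tmpl.Execute(os.Stdout, spec)
    }
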
	I0818 18:58:06.901511   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:ad:03:e4 in network default
	I0818 18:58:06.902086   25471 main.go:141] libmachine: (ha-189125-m03) Ensuring networks are active...
	I0818 18:58:06.902129   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:06.903085   25471 main.go:141] libmachine: (ha-189125-m03) Ensuring network default is active
	I0818 18:58:06.903554   25471 main.go:141] libmachine: (ha-189125-m03) Ensuring network mk-ha-189125 is active
	I0818 18:58:06.903905   25471 main.go:141] libmachine: (ha-189125-m03) Getting domain xml...
	I0818 18:58:06.904891   25471 main.go:141] libmachine: (ha-189125-m03) Creating domain...
	I0818 18:58:08.152868   25471 main.go:141] libmachine: (ha-189125-m03) Waiting to get IP...
	I0818 18:58:08.153689   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:08.154064   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:08.154122   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:08.154055   26431 retry.go:31] will retry after 268.490085ms: waiting for machine to come up
	I0818 18:58:08.424531   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:08.425036   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:08.425065   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:08.424979   26431 retry.go:31] will retry after 316.367894ms: waiting for machine to come up
	I0818 18:58:08.742560   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:08.743048   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:08.743069   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:08.743020   26431 retry.go:31] will retry after 371.13386ms: waiting for machine to come up
	I0818 18:58:09.115801   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:09.116351   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:09.116396   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:09.116284   26431 retry.go:31] will retry after 397.759321ms: waiting for machine to come up
	I0818 18:58:09.515854   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:09.516285   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:09.516316   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:09.516238   26431 retry.go:31] will retry after 578.790648ms: waiting for machine to come up
	I0818 18:58:10.097094   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:10.097525   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:10.097551   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:10.097469   26431 retry.go:31] will retry after 721.378969ms: waiting for machine to come up
	I0818 18:58:10.820162   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:10.820625   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:10.820653   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:10.820524   26431 retry.go:31] will retry after 1.086370836s: waiting for machine to come up
	I0818 18:58:11.908115   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:11.908506   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:11.908533   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:11.908493   26431 retry.go:31] will retry after 1.087510486s: waiting for machine to come up
	I0818 18:58:12.997612   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:12.998073   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:12.998106   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:12.998005   26431 retry.go:31] will retry after 1.209672816s: waiting for machine to come up
	I0818 18:58:14.209366   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:14.209806   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:14.209833   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:14.209757   26431 retry.go:31] will retry after 1.547070722s: waiting for machine to come up
	I0818 18:58:15.759631   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:15.760118   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:15.760146   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:15.760096   26431 retry.go:31] will retry after 2.328434742s: waiting for machine to come up
	I0818 18:58:18.091165   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:18.091673   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:18.091700   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:18.091630   26431 retry.go:31] will retry after 3.093157403s: waiting for machine to come up
	I0818 18:58:21.188443   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:21.188880   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:21.188904   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:21.188824   26431 retry.go:31] will retry after 4.344973301s: waiting for machine to come up
	I0818 18:58:25.536417   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:25.536845   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:25.536872   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:25.536798   26431 retry.go:31] will retry after 4.579228582s: waiting for machine to come up
	I0818 18:58:30.120729   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.120845   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has current primary IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.120862   25471 main.go:141] libmachine: (ha-189125-m03) Found IP for machine: 192.168.39.170
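The repeated "waiting for machine to come up" lines above are a retry loop: the driver polls the network's DHCP leases for the new MAC with a growing delay until an address appears. A minimal sketch of that pattern, where lookupLeaseIP is a hypothetical stand-in for the real lease query:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupLeaseIP is a hypothetical stand-in for querying the libvirt
    // network's DHCP leases for a given MAC; it errors until a lease exists.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP polls with an increasing delay, roughly mirroring the retry
    // intervals in the log (hundreds of ms growing to several seconds).
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // back off gradually
            }
        }
        return "", fmt.Errorf("machine %s did not obtain an IP within %v", mac, timeout)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:df:db:3a", 3*time.Second); err != nil {
            fmt.Println("error:", err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
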
	I0818 18:58:30.120888   25471 main.go:141] libmachine: (ha-189125-m03) Reserving static IP address...
	I0818 18:58:30.121350   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find host DHCP lease matching {name: "ha-189125-m03", mac: "52:54:00:df:db:3a", ip: "192.168.39.170"} in network mk-ha-189125
	I0818 18:58:30.195549   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Getting to WaitForSSH function...
	I0818 18:58:30.195577   25471 main.go:141] libmachine: (ha-189125-m03) Reserved static IP address: 192.168.39.170
	I0818 18:58:30.195589   25471 main.go:141] libmachine: (ha-189125-m03) Waiting for SSH to be available...
	I0818 18:58:30.199159   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.199865   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.199895   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.200103   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Using SSH client type: external
	I0818 18:58:30.200141   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa (-rw-------)
	I0818 18:58:30.200171   25471 main.go:141] libmachine: (ha-189125-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 18:58:30.200192   25471 main.go:141] libmachine: (ha-189125-m03) DBG | About to run SSH command:
	I0818 18:58:30.200207   25471 main.go:141] libmachine: (ha-189125-m03) DBG | exit 0
	I0818 18:58:30.335735   25471 main.go:141] libmachine: (ha-189125-m03) DBG | SSH cmd err, output: <nil>: 
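Waiting for SSH amounts to running "exit 0" through an external ssh client with the options shown above until it returns success. A rough equivalent with os/exec, reusing the key path and address from the log but abbreviating the option list:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs `exit 0` on the guest with options similar to those the
    // log shows for the external SSH client; a nil error means sshd is up.
    func sshReady(keyPath, addr string) error {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+addr,
            "exit 0")
        return cmd.Run()
    }

    func main() {
        key := "/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa"
        for i := 0; i < 10; i++ {
            if err := sshReady(key, "192.168.39.170"); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }
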
	I0818 18:58:30.335930   25471 main.go:141] libmachine: (ha-189125-m03) KVM machine creation complete!
	I0818 18:58:30.336254   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetConfigRaw
	I0818 18:58:30.336815   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:30.337015   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:30.337157   25471 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 18:58:30.337169   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 18:58:30.338391   25471 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 18:58:30.338407   25471 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 18:58:30.338416   25471 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 18:58:30.338423   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.340512   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.340848   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.340875   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.341030   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.341194   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.341363   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.341507   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.341669   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:30.341934   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:30.341947   25471 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 18:58:30.454732   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:58:30.454760   25471 main.go:141] libmachine: Detecting the provisioner...
	I0818 18:58:30.454771   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.457654   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.458020   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.458051   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.458166   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.458365   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.458543   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.458682   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.458850   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:30.459053   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:30.459067   25471 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 18:58:30.572018   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 18:58:30.572095   25471 main.go:141] libmachine: found compatible host: buildroot
	I0818 18:58:30.572108   25471 main.go:141] libmachine: Provisioning with buildroot...
	I0818 18:58:30.572124   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetMachineName
	I0818 18:58:30.572363   25471 buildroot.go:166] provisioning hostname "ha-189125-m03"
	I0818 18:58:30.572397   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetMachineName
	I0818 18:58:30.572552   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.575238   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.575618   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.575646   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.575812   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.575983   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.576145   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.576274   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.576408   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:30.576602   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:30.576614   25471 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-189125-m03 && echo "ha-189125-m03" | sudo tee /etc/hostname
	I0818 18:58:30.707730   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125-m03
	
	I0818 18:58:30.707760   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.710383   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.710718   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.710742   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.710940   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.711111   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.711243   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.711352   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.711506   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:30.711666   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:30.711681   25471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-189125-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-189125-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-189125-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:58:30.834309   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
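The hostname step above is two shell commands pushed over the same SSH session: set the hostname, then patch /etc/hosts so 127.0.1.1 resolves to it. A small sketch that assembles the same snippet; the runner that would actually execute it on the guest is assumed, not shown:

    package main

    import "fmt"

    // hostsFixupCmd reproduces the shell snippet from the log: ensure
    // /etc/hosts maps 127.0.1.1 to the node's hostname, editing the existing
    // entry or appending a new one.
    func hostsFixupCmd(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }

    func main() {
        // First command sets the hostname, second keeps /etc/hosts consistent.
        fmt.Println(`sudo hostname ha-189125-m03 && echo "ha-189125-m03" | sudo tee /etc/hostname`)
        fmt.Println(hostsFixupCmd("ha-189125-m03"))
    }
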
	I0818 18:58:30.834343   25471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 18:58:30.834362   25471 buildroot.go:174] setting up certificates
	I0818 18:58:30.834373   25471 provision.go:84] configureAuth start
	I0818 18:58:30.834386   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetMachineName
	I0818 18:58:30.834651   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 18:58:30.837186   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.837472   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.837505   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.837670   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.840052   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.840424   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.840446   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.840564   25471 provision.go:143] copyHostCerts
	I0818 18:58:30.840588   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:58:30.840619   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 18:58:30.840631   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:58:30.840693   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 18:58:30.840773   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:58:30.840793   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 18:58:30.840799   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:58:30.840839   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 18:58:30.840891   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:58:30.840916   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 18:58:30.840925   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:58:30.840957   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 18:58:30.841147   25471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.ha-189125-m03 san=[127.0.0.1 192.168.39.170 ha-189125-m03 localhost minikube]
	I0818 18:58:30.904128   25471 provision.go:177] copyRemoteCerts
	I0818 18:58:30.904182   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:58:30.904207   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.906881   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.907285   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.907312   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.907508   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.907702   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.907863   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.907977   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 18:58:30.994118   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 18:58:30.994199   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:58:31.020830   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 18:58:31.020916   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 18:58:31.046410   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 18:58:31.046483   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 18:58:31.071787   25471 provision.go:87] duration metric: took 237.40302ms to configureAuth
	I0818 18:58:31.071814   25471 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:58:31.072024   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:58:31.072095   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:31.074367   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.074828   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.074856   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.075151   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.075397   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.075554   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.075687   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.075835   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:31.075988   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:31.076001   25471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 18:58:31.355802   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 18:58:31.355827   25471 main.go:141] libmachine: Checking connection to Docker...
	I0818 18:58:31.355835   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetURL
	I0818 18:58:31.357216   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Using libvirt version 6000000
	I0818 18:58:31.359482   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.359881   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.359906   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.360077   25471 main.go:141] libmachine: Docker is up and running!
	I0818 18:58:31.360099   25471 main.go:141] libmachine: Reticulating splines...
	I0818 18:58:31.360106   25471 client.go:171] duration metric: took 24.801921523s to LocalClient.Create
	I0818 18:58:31.360132   25471 start.go:167] duration metric: took 24.801986295s to libmachine.API.Create "ha-189125"
	I0818 18:58:31.360144   25471 start.go:293] postStartSetup for "ha-189125-m03" (driver="kvm2")
	I0818 18:58:31.360155   25471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:58:31.360176   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.360402   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:58:31.360425   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:31.362382   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.362798   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.362824   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.363003   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.363188   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.363313   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.363486   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 18:58:31.455310   25471 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:58:31.459841   25471 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:58:31.459866   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 18:58:31.459944   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 18:58:31.460020   25471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 18:58:31.460029   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 18:58:31.460106   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 18:58:31.470112   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:58:31.494095   25471 start.go:296] duration metric: took 133.937124ms for postStartSetup
	I0818 18:58:31.494145   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetConfigRaw
	I0818 18:58:31.494662   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 18:58:31.496929   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.497280   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.497308   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.497538   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:58:31.497775   25471 start.go:128] duration metric: took 24.959388213s to createHost
	I0818 18:58:31.497799   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:31.500075   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.500410   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.500446   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.500609   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.500806   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.501007   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.501155   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.501310   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:31.501472   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:31.501482   25471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:58:31.616447   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007511.591856029
	
	I0818 18:58:31.616469   25471 fix.go:216] guest clock: 1724007511.591856029
	I0818 18:58:31.616477   25471 fix.go:229] Guest: 2024-08-18 18:58:31.591856029 +0000 UTC Remote: 2024-08-18 18:58:31.497787799 +0000 UTC m=+194.702784877 (delta=94.06823ms)
	I0818 18:58:31.616492   25471 fix.go:200] guest clock delta is within tolerance: 94.06823ms
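The guest clock check runs date +%s.%N on the node, parses the seconds.nanoseconds string, and compares it against the host's wall clock; only a delta beyond some tolerance would trigger an adjustment. A compact sketch of that comparison, with the tolerance value chosen arbitrarily for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds)
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        f, err := strconv.ParseFloat(out, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(f)
        nsec := int64((f - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1724007511.591856029")
        if err != nil {
            panic(err)
        }
        host := time.Now()
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold, not minikube's
        fmt.Printf("guest clock delta is %v (tolerance %v)\n", delta, tolerance)
        if delta > tolerance {
            fmt.Println("would adjust the guest clock here")
        }
    }
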
	I0818 18:58:31.616499   25471 start.go:83] releasing machines lock for "ha-189125-m03", held for 25.078270959s
	I0818 18:58:31.616519   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.616743   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 18:58:31.619040   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.619414   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.619457   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.621834   25471 out.go:177] * Found network options:
	I0818 18:58:31.623264   25471 out.go:177]   - NO_PROXY=192.168.39.49,192.168.39.147
	W0818 18:58:31.624565   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 18:58:31.624590   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 18:58:31.624602   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.625157   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.625369   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.625466   25471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:58:31.625499   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	W0818 18:58:31.625588   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 18:58:31.625613   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 18:58:31.625676   25471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 18:58:31.625698   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:31.628154   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.628524   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.628550   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.628600   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.628696   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.628859   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.628995   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.629014   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.629018   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.629155   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 18:58:31.629191   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.629330   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.629451   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.629608   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 18:58:31.876456   25471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:58:31.882444   25471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:58:31.882510   25471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:58:31.899344   25471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 18:58:31.899370   25471 start.go:495] detecting cgroup driver to use...
	I0818 18:58:31.899444   25471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:58:31.916882   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:58:31.931105   25471 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:58:31.931154   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:58:31.947568   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:58:31.961682   25471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:58:32.087953   25471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:58:32.234741   25471 docker.go:233] disabling docker service ...
	I0818 18:58:32.234800   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:58:32.249234   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:58:32.264814   25471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:58:32.414355   25471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:58:32.534121   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:58:32.548870   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:58:32.567567   25471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 18:58:32.567637   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.578656   25471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 18:58:32.578731   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.589401   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.600042   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.614032   25471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:58:32.627923   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.641750   25471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.661479   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.673027   25471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:58:32.683174   25471 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 18:58:32.683230   25471 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 18:58:32.696444   25471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 18:58:32.706515   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:58:32.830513   25471 ssh_runner.go:195] Run: sudo systemctl restart crio
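The CRI-O setup above is a sequence of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, unprivileged-port sysctl) plus a netfilter check that falls back to modprobe br_netfilter before the final crio restart. A sketch of sequencing those guest commands through a single runner, where runOnGuest is a hypothetical stand-in for the ssh_runner seen in the log:

    package main

    import "fmt"

    // runOnGuest is a hypothetical stand-in for executing a command on the
    // node over SSH and returning its error status.
    func runOnGuest(cmd string) error {
        fmt.Println("would run:", cmd)
        return nil
    }

    func configureCRIO() error {
        steps := []string{
            // point cri-o at the expected pause image
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
            // use cgroupfs as the cgroup manager
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            // allow pods to bind unprivileged ports
            `sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf`,
        }
        for _, s := range steps {
            if err := runOnGuest(s); err != nil {
                return err
            }
        }
        // netfilter: if the sysctl key is absent, load br_netfilter before restarting crio
        if err := runOnGuest("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            if err := runOnGuest("sudo modprobe br_netfilter"); err != nil {
                return err
            }
        }
        return runOnGuest("sudo systemctl restart crio")
    }

    func main() {
        if err := configureCRIO(); err != nil {
            fmt.Println("configure failed:", err)
        }
    }
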
	I0818 18:58:32.978738   25471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 18:58:32.978817   25471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 18:58:32.983525   25471 start.go:563] Will wait 60s for crictl version
	I0818 18:58:32.983587   25471 ssh_runner.go:195] Run: which crictl
	I0818 18:58:32.987190   25471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:58:33.031555   25471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 18:58:33.031636   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:58:33.065888   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:58:33.098732   25471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 18:58:33.100160   25471 out.go:177]   - env NO_PROXY=192.168.39.49
	I0818 18:58:33.101438   25471 out.go:177]   - env NO_PROXY=192.168.39.49,192.168.39.147
	I0818 18:58:33.102607   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 18:58:33.105330   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:33.105644   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:33.105669   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:33.105878   25471 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:58:33.110328   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:58:33.123156   25471 mustload.go:65] Loading cluster: ha-189125
	I0818 18:58:33.123440   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:58:33.123746   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:58:33.123791   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:58:33.139743   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0818 18:58:33.140162   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:58:33.140686   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:58:33.140707   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:58:33.140989   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:58:33.141179   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:58:33.142679   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:58:33.142947   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:58:33.142978   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:58:33.156852   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0818 18:58:33.157260   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:58:33.157661   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:58:33.157679   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:58:33.157939   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:58:33.158063   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:58:33.158232   25471 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125 for IP: 192.168.39.170
	I0818 18:58:33.158242   25471 certs.go:194] generating shared ca certs ...
	I0818 18:58:33.158256   25471 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:58:33.158398   25471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 18:58:33.158454   25471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 18:58:33.158466   25471 certs.go:256] generating profile certs ...
	I0818 18:58:33.158557   25471 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key
	I0818 18:58:33.158587   25471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ed2123f4
	I0818 18:58:33.158607   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ed2123f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49 192.168.39.147 192.168.39.170 192.168.39.254]
	I0818 18:58:33.272120   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ed2123f4 ...
	I0818 18:58:33.272147   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ed2123f4: {Name:mkeed75f0c4d827541cbfb95863e2cd154b9d88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:58:33.272346   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ed2123f4 ...
	I0818 18:58:33.272363   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ed2123f4: {Name:mkf3adaf9587675fabd0a13e2c88f3c36ecccf12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:58:33.272460   25471 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ed2123f4 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt
	I0818 18:58:33.272617   25471 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ed2123f4 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key
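The apiserver certificate generated above is signed by the cluster CA and carries every control-plane IP plus the service IP and the HA virtual IP as SANs (the list in the log: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.49, 192.168.39.147, 192.168.39.170, 192.168.39.254). A self-contained crypto/x509 sketch of issuing such a certificate, using a throwaway CA and illustrative subject fields rather than minikube's real ones:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // A throwaway CA stands in for the cluster CA (ca.crt / ca.key).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SAN IPs taken from the log above.
        sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
            "192.168.39.49", "192.168.39.147", "192.168.39.170", "192.168.39.254"}
        ips := make([]net.IP, 0, len(sans))
        for _, s := range sans {
            ips = append(ips, net.ParseIP(s))
        }
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"}, // illustrative subject
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
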
	I0818 18:58:33.272783   25471 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key
	I0818 18:58:33.272803   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 18:58:33.272824   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 18:58:33.272843   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 18:58:33.272861   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 18:58:33.272877   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 18:58:33.272893   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 18:58:33.272910   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 18:58:33.272928   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 18:58:33.272989   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 18:58:33.273026   25471 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 18:58:33.273038   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:58:33.273073   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:58:33.273102   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:58:33.273134   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 18:58:33.273188   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:58:33.273228   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 18:58:33.273248   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:58:33.273268   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 18:58:33.273310   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:58:33.276139   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:58:33.276588   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:58:33.276611   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:58:33.276790   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:58:33.276977   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:58:33.277136   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:58:33.277316   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:58:33.347784   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 18:58:33.352946   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 18:58:33.366917   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 18:58:33.371451   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0818 18:58:33.381894   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 18:58:33.386039   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 18:58:33.396178   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 18:58:33.400809   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 18:58:33.411811   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 18:58:33.416558   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 18:58:33.427317   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 18:58:33.431864   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 18:58:33.442548   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:58:33.467101   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:58:33.490738   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:58:33.514519   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 18:58:33.538233   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0818 18:58:33.562633   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 18:58:33.585453   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:58:33.608469   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 18:58:33.632275   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 18:58:33.655374   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:58:33.678478   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 18:58:33.701922   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 18:58:33.717767   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0818 18:58:33.733671   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 18:58:33.750087   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 18:58:33.766511   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 18:58:33.783020   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 18:58:33.800197   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 18:58:33.817708   25471 ssh_runner.go:195] Run: openssl version
	I0818 18:58:33.823459   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 18:58:33.834484   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 18:58:33.838909   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 18:58:33.838964   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 18:58:33.844921   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 18:58:33.856075   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 18:58:33.869016   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 18:58:33.873465   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 18:58:33.873530   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 18:58:33.879132   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 18:58:33.890574   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:58:33.901547   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:58:33.906147   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:58:33.906195   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:58:33.911951   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
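
The openssl/ln steps above follow the standard OpenSSL trust-store convention: each CA certificate placed under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL-linked clients can locate it. A rough, purely illustrative Go sketch of the same idea (paths mirror the log; the helper name is hypothetical and this is not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the certificate's OpenSSL subject hash and creates the
// /etc/ssl/certs/<hash>.0 symlink pointing at it, as the log above does with
// "openssl x509 -hash" followed by "ln -fs". Run as root.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ignore the error if the link does not exist yet
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
	fmt.Println("trust-store symlink created")
}
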
	I0818 18:58:33.923248   25471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:58:33.927548   25471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:58:33.927603   25471 kubeadm.go:934] updating node {m03 192.168.39.170 8443 v1.31.0 crio true true} ...
	I0818 18:58:33.927694   25471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-189125-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
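
The kubelet unit shown above (and the 10-kubeadm.conf drop-in copied later in this log) is produced by filling a template with the node's Kubernetes version, hostname override, and IP. A minimal sketch of that kind of templating with Go's text/template; the template text and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// dropIn is an illustrative systemd drop-in template in the shape of the
// ExecStart line the log shows for ha-189125-m03.
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the node m03 in the log above.
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.0", "ha-189125-m03", "192.168.39.170"})
	if err != nil {
		panic(err)
	}
}
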
	I0818 18:58:33.927727   25471 kube-vip.go:115] generating kube-vip config ...
	I0818 18:58:33.927764   25471 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 18:58:33.945838   25471 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 18:58:33.945905   25471 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
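
The static Pod manifest above configures kube-vip to advertise the control-plane VIP 192.168.39.254 on port 8443 with leader election across the control-plane nodes; the log below shows it being copied to /etc/kubernetes/manifests/kube-vip.yaml. As a rough illustration only (not part of the test), a tiny Go probe that checks whether that VIP endpoint accepts TCP connections:

package main

import (
	"fmt"
	"net"
	"time"
)

// probeVIP dials the kube-vip control-plane VIP. The address and port come
// from the generated manifest above; everything else is illustrative only.
func probeVIP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeVIP("192.168.39.254:8443", 5*time.Second); err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	fmt.Println("VIP reachable")
}
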
	I0818 18:58:33.945973   25471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:58:33.956467   25471 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0818 18:58:33.956529   25471 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0818 18:58:33.966885   25471 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0818 18:58:33.966905   25471 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0818 18:58:33.966921   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0818 18:58:33.966893   25471 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0818 18:58:33.966935   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:58:33.966946   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0818 18:58:33.966998   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0818 18:58:33.967008   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0818 18:58:33.984099   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0818 18:58:33.984137   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0818 18:58:33.984162   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0818 18:58:33.984162   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0818 18:58:33.984206   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0818 18:58:33.984259   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0818 18:58:34.010439   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0818 18:58:34.010474   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0818 18:58:34.847912   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 18:58:34.858200   25471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0818 18:58:34.877745   25471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:58:34.896957   25471 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0818 18:58:34.914953   25471 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0818 18:58:34.919267   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
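
The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the VIP mapping. An equivalent, purely illustrative Go sketch (the real step runs remotely over SSH with sudo; the function name is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<tab>host" from the
// hosts file and appends "ip<tab>host", mirroring the grep -v / echo pipeline
// in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry ensured")
}
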
	I0818 18:58:34.932031   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:58:35.057593   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:58:35.074714   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:58:35.075157   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:58:35.075208   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:58:35.091741   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41985
	I0818 18:58:35.092123   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:58:35.092638   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:58:35.092663   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:58:35.093017   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:58:35.093204   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:58:35.093346   25471 start.go:317] joinCluster: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:58:35.093478   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0818 18:58:35.093501   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:58:35.096426   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:58:35.096858   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:58:35.096881   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:58:35.097056   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:58:35.097230   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:58:35.097359   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:58:35.097457   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:58:35.242678   25471 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:58:35.242733   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pfh3i3.msd5m9hr91q8t3xk --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m03 --control-plane --apiserver-advertise-address=192.168.39.170 --apiserver-bind-port=8443"
	I0818 18:58:58.086701   25471 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pfh3i3.msd5m9hr91q8t3xk --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m03 --control-plane --apiserver-advertise-address=192.168.39.170 --apiserver-bind-port=8443": (22.843941512s)
	I0818 18:58:58.086735   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0818 18:58:58.612445   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-189125-m03 minikube.k8s.io/updated_at=2024_08_18T18_58_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=ha-189125 minikube.k8s.io/primary=false
	I0818 18:58:58.749208   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-189125-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0818 18:58:58.875477   25471 start.go:319] duration metric: took 23.782126807s to joinCluster
	I0818 18:58:58.875549   25471 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:58:58.875909   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:58:58.876983   25471 out.go:177] * Verifying Kubernetes components...
	I0818 18:58:58.878171   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:58:59.154035   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:58:59.172953   25471 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:58:59.173200   25471 kapi.go:59] client config for ha-189125: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key", CAFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 18:58:59.173255   25471 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.49:8443
	I0818 18:58:59.173483   25471 node_ready.go:35] waiting up to 6m0s for node "ha-189125-m03" to be "Ready" ...
	I0818 18:58:59.173569   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:58:59.173577   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:59.173585   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:59.173591   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:59.178424   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:59.674723   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:58:59.674750   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:59.674760   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:59.674767   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:59.678690   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:00.174129   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:00.174154   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:00.174161   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:00.174165   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:00.177450   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:00.674473   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:00.674500   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:00.674512   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:00.674518   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:00.677814   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:01.173685   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:01.173711   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:01.173723   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:01.173728   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:01.180323   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:01.180865   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:01.674488   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:01.674505   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:01.674512   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:01.674517   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:01.678917   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:02.174653   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:02.174675   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:02.174683   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:02.174687   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:02.177992   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:02.674257   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:02.674279   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:02.674289   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:02.674297   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:02.677513   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:03.173931   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:03.173951   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:03.173960   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:03.173965   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:03.177328   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:03.674447   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:03.674467   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:03.674475   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:03.674479   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:03.682440   25471 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0818 18:59:03.683040   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:04.174290   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:04.174312   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:04.174320   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:04.174325   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:04.177866   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:04.673718   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:04.673747   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:04.673759   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:04.673764   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:04.678150   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:05.174737   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:05.174767   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:05.174778   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:05.174786   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:05.178667   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:05.674455   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:05.674478   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:05.674489   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:05.674493   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:05.677845   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:06.173993   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:06.174014   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:06.174023   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:06.174028   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:06.177530   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:06.178175   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:06.673920   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:06.673941   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:06.673947   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:06.673952   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:06.677189   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:07.173837   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:07.173860   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:07.173867   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:07.173871   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:07.177228   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:07.674588   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:07.674616   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:07.674628   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:07.674633   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:07.678248   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:08.174659   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:08.174679   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:08.174688   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:08.174691   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:08.178126   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:08.178982   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:08.674418   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:08.674439   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:08.674447   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:08.674451   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:08.677884   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:09.173981   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:09.174006   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:09.174014   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:09.174022   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:09.177121   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:09.673871   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:09.673892   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:09.673900   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:09.673904   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:09.677228   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:10.174694   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:10.174720   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:10.174731   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:10.174740   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:10.178221   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:10.674236   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:10.674255   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:10.674263   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:10.674267   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:10.678799   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:10.679351   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:11.173724   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:11.173746   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:11.173753   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:11.173757   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:11.177123   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:11.674212   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:11.674233   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:11.674242   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:11.674247   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:11.677388   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:12.174158   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:12.174180   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:12.174186   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:12.174189   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:12.177934   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:12.674195   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:12.674220   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:12.674228   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:12.674233   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:12.678167   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:13.174434   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:13.174454   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:13.174464   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:13.174471   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:13.178435   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:13.179175   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:13.674545   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:13.674564   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:13.674573   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:13.674578   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:13.678002   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:14.173921   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:14.173943   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:14.173951   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:14.173955   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:14.176924   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:14.674549   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:14.674572   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:14.674582   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:14.674590   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:14.678491   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:15.173905   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:15.173930   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:15.173940   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:15.173947   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:15.177556   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:15.674219   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:15.674253   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:15.674263   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:15.674270   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:15.677109   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:15.677760   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:16.173818   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:16.173839   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.173848   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.173853   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.177645   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:16.674199   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:16.674221   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.674229   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.674232   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.677227   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.677904   25471 node_ready.go:49] node "ha-189125-m03" has status "Ready":"True"
	I0818 18:59:16.677921   25471 node_ready.go:38] duration metric: took 17.504421939s for node "ha-189125-m03" to be "Ready" ...
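
The node_ready.go wait above, and the pod_ready.go wait that follows, poll the API server directly (the repeated round_trippers GETs against /api/v1/nodes and /api/v1/namespaces/kube-system/pods). A minimal client-go sketch of that kind of readiness check, assuming a standard kubeconfig; the kubeconfig path and function name are illustrative, not minikube's own helpers:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls a node's Ready condition until it is True or the
// timeout expires, roughly what the wait in the log above is doing.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("node %q not Ready after %s", name, timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Hypothetical kubeconfig path; minikube's wait uses its profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-189125-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-189125-m03 is Ready")
}
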
	I0818 18:59:16.677932   25471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:59:16.678010   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:16.678021   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.678032   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.678038   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.684399   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:16.690781   25471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.690854   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-7xr26
	I0818 18:59:16.690864   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.690871   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.690881   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.693466   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.694169   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:16.694189   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.694196   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.694200   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.696615   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.697128   25471 pod_ready.go:93] pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:16.697144   25471 pod_ready.go:82] duration metric: took 6.341179ms for pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.697165   25471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.697213   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q9j97
	I0818 18:59:16.697222   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.697232   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.697239   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.699879   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.700461   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:16.700476   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.700486   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.700492   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.703993   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:16.704464   25471 pod_ready.go:93] pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:16.704479   25471 pod_ready.go:82] duration metric: took 7.306351ms for pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.704488   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.704543   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125
	I0818 18:59:16.704551   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.704558   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.704562   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.711062   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:16.711643   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:16.711666   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.711676   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.711682   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.714117   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.714620   25471 pod_ready.go:93] pod "etcd-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:16.714637   25471 pod_ready.go:82] duration metric: took 10.14269ms for pod "etcd-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.714648   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.714700   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125-m02
	I0818 18:59:16.714710   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.714719   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.714727   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.717370   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.718114   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:16.718126   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.718136   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.718140   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.720601   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.721175   25471 pod_ready.go:93] pod "etcd-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:16.721199   25471 pod_ready.go:82] duration metric: took 6.534639ms for pod "etcd-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.721211   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.874560   25471 request.go:632] Waited for 153.286293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125-m03
	I0818 18:59:16.874652   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125-m03
	I0818 18:59:16.874663   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.874672   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.874680   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.878076   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.075120   25471 request.go:632] Waited for 196.23254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:17.075204   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:17.075211   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.075219   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.075228   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.078611   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.079067   25471 pod_ready.go:93] pod "etcd-ha-189125-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:17.079084   25471 pod_ready.go:82] duration metric: took 357.865975ms for pod "etcd-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.079099   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.275205   25471 request.go:632] Waited for 196.036569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125
	I0818 18:59:17.275264   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125
	I0818 18:59:17.275269   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.275279   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.275284   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.278437   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.474562   25471 request.go:632] Waited for 195.375209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:17.474621   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:17.474627   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.474634   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.474638   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.477744   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.478408   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:17.478426   25471 pod_ready.go:82] duration metric: took 399.321932ms for pod "kube-apiserver-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.478435   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.674327   25471 request.go:632] Waited for 195.821614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m02
	I0818 18:59:17.674379   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m02
	I0818 18:59:17.674387   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.674397   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.674405   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.677541   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.874519   25471 request.go:632] Waited for 196.189776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:17.874616   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:17.874624   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.874633   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.874639   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.878226   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.879100   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:17.879117   25471 pod_ready.go:82] duration metric: took 400.676092ms for pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.879125   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.074583   25471 request.go:632] Waited for 195.394392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m03
	I0818 18:59:18.074659   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m03
	I0818 18:59:18.074664   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.074672   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.074678   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.078111   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:18.274617   25471 request.go:632] Waited for 195.740226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:18.274665   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:18.274670   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.274677   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.274681   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.277955   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:18.278582   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:18.278598   25471 pod_ready.go:82] duration metric: took 399.467222ms for pod "kube-apiserver-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.278607   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.474769   25471 request.go:632] Waited for 196.09145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125
	I0818 18:59:18.474819   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125
	I0818 18:59:18.474824   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.474831   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.474836   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.478936   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:18.674303   25471 request.go:632] Waited for 192.656613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:18.674366   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:18.674374   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.674384   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.674396   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.681387   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:18.682195   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:18.682252   25471 pod_ready.go:82] duration metric: took 403.636564ms for pod "kube-controller-manager-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.682269   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.874248   25471 request.go:632] Waited for 191.908122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m02
	I0818 18:59:18.874323   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m02
	I0818 18:59:18.874328   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.874336   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.874348   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.877687   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.074828   25471 request.go:632] Waited for 196.356111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:19.074879   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:19.074884   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.074892   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.074896   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.078205   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.078748   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:19.078766   25471 pod_ready.go:82] duration metric: took 396.490052ms for pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.078776   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.274725   25471 request.go:632] Waited for 195.892952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m03
	I0818 18:59:19.274816   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m03
	I0818 18:59:19.274828   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.274839   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.274848   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.278314   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.474318   25471 request.go:632] Waited for 195.307964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:19.474388   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:19.474393   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.474401   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.474406   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.478526   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:19.479040   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:19.479061   25471 pod_ready.go:82] duration metric: took 400.279756ms for pod "kube-controller-manager-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.479071   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-22f8v" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.675115   25471 request.go:632] Waited for 195.971823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22f8v
	I0818 18:59:19.675214   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22f8v
	I0818 18:59:19.675223   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.675233   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.675240   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.678741   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.874999   25471 request.go:632] Waited for 195.375691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:19.875058   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:19.875075   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.875082   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.875086   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.878622   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.879349   25471 pod_ready.go:93] pod "kube-proxy-22f8v" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:19.879367   25471 pod_ready.go:82] duration metric: took 400.289102ms for pod "kube-proxy-22f8v" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.879397   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-96xwx" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.074522   25471 request.go:632] Waited for 195.044509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-96xwx
	I0818 18:59:20.074589   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-96xwx
	I0818 18:59:20.074594   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.074601   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.074605   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.077810   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:20.274920   25471 request.go:632] Waited for 196.321217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:20.274997   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:20.275004   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.275016   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.275026   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.278538   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:20.279137   25471 pod_ready.go:93] pod "kube-proxy-96xwx" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:20.279154   25471 pod_ready.go:82] duration metric: took 399.750001ms for pod "kube-proxy-96xwx" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.279165   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-scwlr" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.474246   25471 request.go:632] Waited for 195.025426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scwlr
	I0818 18:59:20.474332   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scwlr
	I0818 18:59:20.474339   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.474350   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.474355   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.477715   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:20.675028   25471 request.go:632] Waited for 196.381301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:20.675108   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:20.675117   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.675125   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.675131   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.678645   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:20.679602   25471 pod_ready.go:93] pod "kube-proxy-scwlr" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:20.679621   25471 pod_ready.go:82] duration metric: took 400.448549ms for pod "kube-proxy-scwlr" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.679631   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.874611   25471 request.go:632] Waited for 194.912911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125
	I0818 18:59:20.874680   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125
	I0818 18:59:20.874688   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.874699   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.874720   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.877830   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.074973   25471 request.go:632] Waited for 196.353479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:21.075035   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:21.075042   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.075051   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.075066   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.078025   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:21.078680   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:21.078701   25471 pod_ready.go:82] duration metric: took 399.062562ms for pod "kube-scheduler-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.078710   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.274779   25471 request.go:632] Waited for 196.015279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m02
	I0818 18:59:21.274839   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m02
	I0818 18:59:21.274843   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.274851   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.274860   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.278054   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.475025   25471 request.go:632] Waited for 196.356085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:21.475083   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:21.475090   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.475100   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.475110   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.478447   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.478980   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:21.478995   25471 pod_ready.go:82] duration metric: took 400.280156ms for pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.479005   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.674686   25471 request.go:632] Waited for 195.59456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m03
	I0818 18:59:21.674739   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m03
	I0818 18:59:21.674744   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.674751   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.674757   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.678130   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.874970   25471 request.go:632] Waited for 196.153286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:21.875042   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:21.875048   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.875055   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.875059   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.878472   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.878998   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:21.879018   25471 pod_ready.go:82] duration metric: took 400.005768ms for pod "kube-scheduler-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.879030   25471 pod_ready.go:39] duration metric: took 5.201085905s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:59:21.879047   25471 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:59:21.879110   25471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:59:21.897264   25471 api_server.go:72] duration metric: took 23.02167788s to wait for apiserver process to appear ...
	I0818 18:59:21.897292   25471 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:59:21.897313   25471 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0818 18:59:21.901779   25471 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I0818 18:59:21.901848   25471 round_trippers.go:463] GET https://192.168.39.49:8443/version
	I0818 18:59:21.901859   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.901869   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.901877   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.902704   25471 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0818 18:59:21.902762   25471 api_server.go:141] control plane version: v1.31.0
	I0818 18:59:21.902779   25471 api_server.go:131] duration metric: took 5.47891ms to wait for apiserver health ...
	I0818 18:59:21.902787   25471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:59:22.074544   25471 request.go:632] Waited for 171.697311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:22.074606   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:22.074613   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:22.074624   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:22.074629   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:22.081124   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:22.088910   25471 system_pods.go:59] 24 kube-system pods found
	I0818 18:59:22.088939   25471 system_pods.go:61] "coredns-6f6b679f8f-7xr26" [d4354313-0e2d-4d96-9cd1-a8f69a4aee26] Running
	I0818 18:59:22.088944   25471 system_pods.go:61] "coredns-6f6b679f8f-q9j97" [1f1c0597-6624-4a3e-8356-7d23555c2809] Running
	I0818 18:59:22.088949   25471 system_pods.go:61] "etcd-ha-189125" [441d8b87-bb19-479f-86a3-eda66e820a81] Running
	I0818 18:59:22.088952   25471 system_pods.go:61] "etcd-ha-189125-m02" [b656f93e-ece8-41c0-b109-584cf52e7b64] Running
	I0818 18:59:22.088955   25471 system_pods.go:61] "etcd-ha-189125-m03" [6e53b8eb-e64c-48db-8b5d-cd7c0dca3be5] Running
	I0818 18:59:22.088959   25471 system_pods.go:61] "kindnet-24xql" [ba1034b3-04c9-4c64-8fde-7b45ea42f21c] Running
	I0818 18:59:22.088963   25471 system_pods.go:61] "kindnet-jwxjh" [086477c9-e6eb-403e-adc7-b15347918484] Running
	I0818 18:59:22.088967   25471 system_pods.go:61] "kindnet-qhnpv" [b23c4910-6e34-46ec-98f2-60ec7ebdd064] Running
	I0818 18:59:22.088973   25471 system_pods.go:61] "kube-apiserver-ha-189125" [707fe85b-0545-4306-aa6f-22580ddb6203] Running
	I0818 18:59:22.088977   25471 system_pods.go:61] "kube-apiserver-ha-189125-m02" [91926546-4ebb-4e81-a0eb-ffaff8d05fdc] Running
	I0818 18:59:22.088982   25471 system_pods.go:61] "kube-apiserver-ha-189125-m03" [51f30627-fb00-4c82-a07f-e4b43a1e1575] Running
	I0818 18:59:22.088991   25471 system_pods.go:61] "kube-controller-manager-ha-189125" [97597204-06d9-4bd5-946d-3f429d2f0d35] Running
	I0818 18:59:22.088997   25471 system_pods.go:61] "kube-controller-manager-ha-189125-m02" [1a866408-5605-49f1-b183-a0c438685633] Running
	I0818 18:59:22.089004   25471 system_pods.go:61] "kube-controller-manager-ha-189125-m03" [128f040d-6a09-4c72-bf20-b7289d2a0708] Running
	I0818 18:59:22.089010   25471 system_pods.go:61] "kube-proxy-22f8v" [446b7123-e92b-4ce3-b3a4-d096e00ea7e9] Running
	I0818 18:59:22.089017   25471 system_pods.go:61] "kube-proxy-96xwx" [c3f6dfae-e097-4889-933b-433f1b6b78fe] Running
	I0818 18:59:22.089025   25471 system_pods.go:61] "kube-proxy-scwlr" [03131eab-be49-4cb1-a0a6-1349f0f8eef7] Running
	I0818 18:59:22.089028   25471 system_pods.go:61] "kube-scheduler-ha-189125" [48202e0e-cebc-47fd-b18a-1dc6372caf8a] Running
	I0818 18:59:22.089034   25471 system_pods.go:61] "kube-scheduler-ha-189125-m02" [cc583916-30b6-46a6-ab8a-651f68065443] Running
	I0818 18:59:22.089037   25471 system_pods.go:61] "kube-scheduler-ha-189125-m03" [c73cba87-81c0-4389-94f3-21b49a085a05] Running
	I0818 18:59:22.089041   25471 system_pods.go:61] "kube-vip-ha-189125" [0546880a-99fa-4d9a-a754-586b3b7921ee] Running
	I0818 18:59:22.089044   25471 system_pods.go:61] "kube-vip-ha-189125-m02" [ad04a007-45f2-4a01-97e3-202fa39a028a] Running
	I0818 18:59:22.089049   25471 system_pods.go:61] "kube-vip-ha-189125-m03" [993160f6-c484-4e27-9db6-733bf0839bec] Running
	I0818 18:59:22.089052   25471 system_pods.go:61] "storage-provisioner" [35b948dd-9b74-4f76-9cdb-82e0901fc421] Running
	I0818 18:59:22.089058   25471 system_pods.go:74] duration metric: took 186.266555ms to wait for pod list to return data ...
	I0818 18:59:22.089068   25471 default_sa.go:34] waiting for default service account to be created ...
	I0818 18:59:22.274502   25471 request.go:632] Waited for 185.354556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/default/serviceaccounts
	I0818 18:59:22.274553   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/default/serviceaccounts
	I0818 18:59:22.274557   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:22.274564   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:22.274570   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:22.278326   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:22.278433   25471 default_sa.go:45] found service account: "default"
	I0818 18:59:22.278448   25471 default_sa.go:55] duration metric: took 189.373266ms for default service account to be created ...
	I0818 18:59:22.278457   25471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 18:59:22.474985   25471 request.go:632] Waited for 196.46034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:22.475048   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:22.475055   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:22.475064   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:22.475073   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:22.482161   25471 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0818 18:59:22.489079   25471 system_pods.go:86] 24 kube-system pods found
	I0818 18:59:22.489105   25471 system_pods.go:89] "coredns-6f6b679f8f-7xr26" [d4354313-0e2d-4d96-9cd1-a8f69a4aee26] Running
	I0818 18:59:22.489110   25471 system_pods.go:89] "coredns-6f6b679f8f-q9j97" [1f1c0597-6624-4a3e-8356-7d23555c2809] Running
	I0818 18:59:22.489114   25471 system_pods.go:89] "etcd-ha-189125" [441d8b87-bb19-479f-86a3-eda66e820a81] Running
	I0818 18:59:22.489118   25471 system_pods.go:89] "etcd-ha-189125-m02" [b656f93e-ece8-41c0-b109-584cf52e7b64] Running
	I0818 18:59:22.489121   25471 system_pods.go:89] "etcd-ha-189125-m03" [6e53b8eb-e64c-48db-8b5d-cd7c0dca3be5] Running
	I0818 18:59:22.489125   25471 system_pods.go:89] "kindnet-24xql" [ba1034b3-04c9-4c64-8fde-7b45ea42f21c] Running
	I0818 18:59:22.489128   25471 system_pods.go:89] "kindnet-jwxjh" [086477c9-e6eb-403e-adc7-b15347918484] Running
	I0818 18:59:22.489132   25471 system_pods.go:89] "kindnet-qhnpv" [b23c4910-6e34-46ec-98f2-60ec7ebdd064] Running
	I0818 18:59:22.489135   25471 system_pods.go:89] "kube-apiserver-ha-189125" [707fe85b-0545-4306-aa6f-22580ddb6203] Running
	I0818 18:59:22.489138   25471 system_pods.go:89] "kube-apiserver-ha-189125-m02" [91926546-4ebb-4e81-a0eb-ffaff8d05fdc] Running
	I0818 18:59:22.489142   25471 system_pods.go:89] "kube-apiserver-ha-189125-m03" [51f30627-fb00-4c82-a07f-e4b43a1e1575] Running
	I0818 18:59:22.489146   25471 system_pods.go:89] "kube-controller-manager-ha-189125" [97597204-06d9-4bd5-946d-3f429d2f0d35] Running
	I0818 18:59:22.489153   25471 system_pods.go:89] "kube-controller-manager-ha-189125-m02" [1a866408-5605-49f1-b183-a0c438685633] Running
	I0818 18:59:22.489157   25471 system_pods.go:89] "kube-controller-manager-ha-189125-m03" [128f040d-6a09-4c72-bf20-b7289d2a0708] Running
	I0818 18:59:22.489161   25471 system_pods.go:89] "kube-proxy-22f8v" [446b7123-e92b-4ce3-b3a4-d096e00ea7e9] Running
	I0818 18:59:22.489165   25471 system_pods.go:89] "kube-proxy-96xwx" [c3f6dfae-e097-4889-933b-433f1b6b78fe] Running
	I0818 18:59:22.489172   25471 system_pods.go:89] "kube-proxy-scwlr" [03131eab-be49-4cb1-a0a6-1349f0f8eef7] Running
	I0818 18:59:22.489176   25471 system_pods.go:89] "kube-scheduler-ha-189125" [48202e0e-cebc-47fd-b18a-1dc6372caf8a] Running
	I0818 18:59:22.489179   25471 system_pods.go:89] "kube-scheduler-ha-189125-m02" [cc583916-30b6-46a6-ab8a-651f68065443] Running
	I0818 18:59:22.489185   25471 system_pods.go:89] "kube-scheduler-ha-189125-m03" [c73cba87-81c0-4389-94f3-21b49a085a05] Running
	I0818 18:59:22.489188   25471 system_pods.go:89] "kube-vip-ha-189125" [0546880a-99fa-4d9a-a754-586b3b7921ee] Running
	I0818 18:59:22.489194   25471 system_pods.go:89] "kube-vip-ha-189125-m02" [ad04a007-45f2-4a01-97e3-202fa39a028a] Running
	I0818 18:59:22.489197   25471 system_pods.go:89] "kube-vip-ha-189125-m03" [993160f6-c484-4e27-9db6-733bf0839bec] Running
	I0818 18:59:22.489202   25471 system_pods.go:89] "storage-provisioner" [35b948dd-9b74-4f76-9cdb-82e0901fc421] Running
	I0818 18:59:22.489207   25471 system_pods.go:126] duration metric: took 210.743641ms to wait for k8s-apps to be running ...
	I0818 18:59:22.489216   25471 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 18:59:22.489259   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:59:22.504355   25471 system_svc.go:56] duration metric: took 15.129698ms WaitForService to wait for kubelet
	I0818 18:59:22.504386   25471 kubeadm.go:582] duration metric: took 23.628804308s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:59:22.504409   25471 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:59:22.674529   25471 request.go:632] Waited for 170.025672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes
	I0818 18:59:22.674579   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes
	I0818 18:59:22.674584   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:22.674591   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:22.674596   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:22.678464   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:22.679432   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:59:22.679453   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:59:22.679465   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:59:22.679469   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:59:22.679473   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:59:22.679476   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:59:22.679480   25471 node_conditions.go:105] duration metric: took 175.058999ms to run NodePressure ...
	I0818 18:59:22.679497   25471 start.go:241] waiting for startup goroutines ...
	I0818 18:59:22.679519   25471 start.go:255] writing updated cluster config ...
	I0818 18:59:22.679798   25471 ssh_runner.go:195] Run: rm -f paused
	I0818 18:59:22.731773   25471 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 18:59:22.733662   25471 out.go:177] * Done! kubectl is now configured to use "ha-189125" cluster and "default" namespace by default
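	For readers following the trace above: the pod_ready/api_server lines amount to a simple poll loop against the apiserver, checking each control-plane pod's Ready condition and the /healthz endpoint. Below is a minimal client-go sketch of that loop, offered only as an illustration (it is not minikube's own code); the pod name and the 6-minute timeout come from the log, while the kubeconfig path and the 200ms poll interval are assumptions made for the example.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // the same check pod_ready.go logs as has status "Ready":"True".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: load the default kubeconfig (~/.kube/config), as kubectl would.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // 6m0s matches the per-pod timeout shown in the log above.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        // Pod name taken from the trace; any kube-system component works the same way.
        const ns, name = "kube-system", "kube-apiserver-ha-189125"
        for {
            pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Printf("pod %q in %q is Ready\n", name, ns)
                return
            }
            select {
            case <-ctx.Done():
                panic(fmt.Errorf("timed out waiting for %s: %w", name, ctx.Err()))
            case <-time.After(200 * time.Millisecond): // illustrative interval; the log's ~195ms gaps come from client-side throttling
            }
        }
    }

	The same check can be done non-programmatically with kubectl, e.g. "kubectl wait --for=condition=ready pod/kube-apiserver-ha-189125 -n kube-system --timeout=6m", which is the pattern the test helpers in this report already use.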
	
	
	==> CRI-O <==
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.287036651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007782287008308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d05d52e7-b897-4e45-af8d-5a463c83e659 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.288785008Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66429e4e-2cd7-4c6a-a010-445f14abb9aa name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.288867689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66429e4e-2cd7-4c6a-a010-445f14abb9aa name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.289251831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724007567164495295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379297265805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379300678582,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135c67495a2cb89c56cca8caa093d8714a7ece48cf73f39e05fc0621bed72a37,PodSandboxId:2c884bafa871e9c85f2aea2fb886dbb448272034e6a94d3664290ffe5f8855fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724007379193169633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724007367338546266,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172400736
3376529475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9,PodSandboxId:04309b5215c4dc8fe94f1ba5fdb3ac8c79160d733be44be461dc6a09e6064091,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172400735438
7697025,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb7d6df05e3ce11ba7b3990f13150037,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724007351593170943,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0,PodSandboxId:05702b9002160611e66e662a1b238091c7a6f7a831c1393eab43feff845a4b73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724007351541675105,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724007351506819073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036,PodSandboxId:3e5f93e63a1d2a9b39ac0e4225131948fd1257f41a95a2e7da309f7c12bb103c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724007351474718818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66429e4e-2cd7-4c6a-a010-445f14abb9aa name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.336048612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8c110f28-bd6e-42a8-ae6a-607ef63341c9 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.336198501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8c110f28-bd6e-42a8-ae6a-607ef63341c9 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.337662960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4754cd55-ccf0-4c7a-9f7c-e4f5fbba7a21 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.338173178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007782338150051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4754cd55-ccf0-4c7a-9f7c-e4f5fbba7a21 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.338711679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc111f5e-eecd-4a22-8238-38290c38f0ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.338775429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc111f5e-eecd-4a22-8238-38290c38f0ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.339024445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724007567164495295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379297265805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379300678582,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135c67495a2cb89c56cca8caa093d8714a7ece48cf73f39e05fc0621bed72a37,PodSandboxId:2c884bafa871e9c85f2aea2fb886dbb448272034e6a94d3664290ffe5f8855fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724007379193169633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724007367338546266,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172400736
3376529475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9,PodSandboxId:04309b5215c4dc8fe94f1ba5fdb3ac8c79160d733be44be461dc6a09e6064091,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172400735438
7697025,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb7d6df05e3ce11ba7b3990f13150037,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724007351593170943,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0,PodSandboxId:05702b9002160611e66e662a1b238091c7a6f7a831c1393eab43feff845a4b73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724007351541675105,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724007351506819073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036,PodSandboxId:3e5f93e63a1d2a9b39ac0e4225131948fd1257f41a95a2e7da309f7c12bb103c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724007351474718818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc111f5e-eecd-4a22-8238-38290c38f0ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.388627571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db4d086e-1be7-4ce6-a375-7e852b177c26 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.388699198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db4d086e-1be7-4ce6-a375-7e852b177c26 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.390483976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cc88fc7-ec74-4a4d-bc67-16c337515289 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.391177500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007782391139397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cc88fc7-ec74-4a4d-bc67-16c337515289 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.391950025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aec8a699-6e89-4af7-ac77-7efd9b674043 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.392009093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aec8a699-6e89-4af7-ac77-7efd9b674043 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.392334642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724007567164495295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379297265805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379300678582,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135c67495a2cb89c56cca8caa093d8714a7ece48cf73f39e05fc0621bed72a37,PodSandboxId:2c884bafa871e9c85f2aea2fb886dbb448272034e6a94d3664290ffe5f8855fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724007379193169633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724007367338546266,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172400736
3376529475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9,PodSandboxId:04309b5215c4dc8fe94f1ba5fdb3ac8c79160d733be44be461dc6a09e6064091,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172400735438
7697025,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb7d6df05e3ce11ba7b3990f13150037,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724007351593170943,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0,PodSandboxId:05702b9002160611e66e662a1b238091c7a6f7a831c1393eab43feff845a4b73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724007351541675105,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724007351506819073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036,PodSandboxId:3e5f93e63a1d2a9b39ac0e4225131948fd1257f41a95a2e7da309f7c12bb103c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724007351474718818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aec8a699-6e89-4af7-ac77-7efd9b674043 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.436035588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e290017c-cba2-445d-9d76-ffed8331d1e8 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.436178359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e290017c-cba2-445d-9d76-ffed8331d1e8 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.437935486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1801bf1-4903-49a2-b93a-b85c0bc64c71 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.438485094Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007782438457829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1801bf1-4903-49a2-b93a-b85c0bc64c71 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.439170313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35faf290-d57c-4bbb-906b-c281e0d6ae19 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.439228578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35faf290-d57c-4bbb-906b-c281e0d6ae19 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:03:02 ha-189125 crio[685]: time="2024-08-18 19:03:02.439466646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724007567164495295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379297265805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379300678582,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135c67495a2cb89c56cca8caa093d8714a7ece48cf73f39e05fc0621bed72a37,PodSandboxId:2c884bafa871e9c85f2aea2fb886dbb448272034e6a94d3664290ffe5f8855fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724007379193169633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724007367338546266,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172400736
3376529475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9,PodSandboxId:04309b5215c4dc8fe94f1ba5fdb3ac8c79160d733be44be461dc6a09e6064091,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172400735438
7697025,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb7d6df05e3ce11ba7b3990f13150037,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724007351593170943,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0,PodSandboxId:05702b9002160611e66e662a1b238091c7a6f7a831c1393eab43feff845a4b73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724007351541675105,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724007351506819073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036,PodSandboxId:3e5f93e63a1d2a9b39ac0e4225131948fd1257f41a95a2e7da309f7c12bb103c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724007351474718818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35faf290-d57c-4bbb-906b-c281e0d6ae19 name=/runtime.v1.RuntimeService/ListContainers
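
The ListContainers, Version, and ImageFsInfo entries above are CRI (Container Runtime Interface) RPCs answered by CRI-O over its unix socket. A minimal sketch of the same ListContainers call, assuming node access to /var/run/crio/crio.sock (the cri-socket path annotated on these nodes) and the google.golang.org/grpc and k8s.io/cri-api modules; everything else here is illustrative, not part of the captured log:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket referenced by the kubeadm cri-socket annotation.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as the ListContainersRequest entries in the crio debug log:
	// an empty filter returns the full container list.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}

Running `crictl ps -a` against the same socket prints essentially the container-status table shown in the next section.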
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1cbf1a420990c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8cdf7a8433c4d       busybox-7dff88458-kxdwj
	181bcd36f89b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   c4e0fe307dc97       coredns-6f6b679f8f-q9j97
	f095c1d3ba818       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0e090955bb301       coredns-6f6b679f8f-7xr26
	135c67495a2cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   2c884bafa871e       storage-provisioner
	197dd2bffa6c8       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   c93b973b05129       kindnet-jwxjh
	d3f078fad6871       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   c28cd1212a8c0       kube-proxy-96xwx
	f9e43e0af59e6       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   04309b5215c4d       kube-vip-ha-189125
	79fc87641651d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   b20bbedf6c011       etcd-ha-189125
	972d7a97ac9ef       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   05702b9002160       kube-controller-manager-ha-189125
	8eb7a6513c9b9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   6fe0bbacb48d2       kube-scheduler-ha-189125
	2d4a0eeafb631       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   3e5f93e63a1d2       kube-apiserver-ha-189125
	
	
	==> coredns [181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2] <==
	[INFO] 10.244.1.2:55994 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151302s
	[INFO] 10.244.1.2:48950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013492s
	[INFO] 10.244.1.2:59880 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115335s
	[INFO] 10.244.2.2:57275 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233377s
	[INFO] 10.244.2.2:56571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00135054s
	[INFO] 10.244.2.2:43437 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086979s
	[INFO] 10.244.0.4:53861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002025942s
	[INFO] 10.244.0.4:36847 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001326246s
	[INFO] 10.244.0.4:36223 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073856s
	[INFO] 10.244.0.4:53397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051079s
	[INFO] 10.244.0.4:60257 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077527s
	[INFO] 10.244.1.2:36105 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142033s
	[INFO] 10.244.2.2:43159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120043s
	[INFO] 10.244.2.2:48451 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105513s
	[INFO] 10.244.2.2:40617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090209s
	[INFO] 10.244.2.2:53467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079345s
	[INFO] 10.244.0.4:34375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009177s
	[INFO] 10.244.0.4:47256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098542s
	[INFO] 10.244.0.4:38739 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087517s
	[INFO] 10.244.1.2:44329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157424s
	[INFO] 10.244.1.2:52970 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000328904s
	[INFO] 10.244.2.2:35139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010364s
	[INFO] 10.244.2.2:51553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000143049s
	[INFO] 10.244.0.4:55737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097209s
	[INFO] 10.244.0.4:56754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040314s
	
	
	==> coredns [f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107] <==
	[INFO] 10.244.2.2:41178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198322s
	[INFO] 10.244.2.2:50482 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000139628s
	[INFO] 10.244.2.2:44346 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000148959s
	[INFO] 10.244.0.4:60109 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000098283s
	[INFO] 10.244.0.4:50813 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001489232s
	[INFO] 10.244.1.2:44640 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003618953s
	[INFO] 10.244.1.2:37984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161286s
	[INFO] 10.244.2.2:55904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150006s
	[INFO] 10.244.2.2:38276 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00189507s
	[INFO] 10.244.2.2:42054 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179179s
	[INFO] 10.244.2.2:35911 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190164s
	[INFO] 10.244.2.2:52357 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163157s
	[INFO] 10.244.0.4:38374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136266s
	[INFO] 10.244.0.4:33983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103666s
	[INFO] 10.244.0.4:42233 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069982s
	[INFO] 10.244.1.2:39502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134749s
	[INFO] 10.244.1.2:38715 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102799s
	[INFO] 10.244.1.2:55122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135608s
	[INFO] 10.244.0.4:56934 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000488s
	[INFO] 10.244.1.2:45200 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251667s
	[INFO] 10.244.1.2:35239 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131205s
	[INFO] 10.244.2.2:47108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152092s
	[INFO] 10.244.2.2:45498 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093397s
	[INFO] 10.244.0.4:52889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059058s
	[INFO] 10.244.0.4:55998 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042989s
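
The query pattern above is ordinary in-cluster resolution: a short name such as kubernetes.default is first expanded through the pod's DNS search path, which is why an NXDOMAIN for kubernetes.default.default.svc.cluster.local appears just before the NOERROR answer for kubernetes.default.svc.cluster.local. A minimal sketch that would generate the same query sequence, assuming it runs inside a pod of this cluster (nothing here beyond the standard service DNS name):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a pod, the resolver appends the namespace search domains
	// (e.g. default.svc.cluster.local) before trying the name verbatim,
	// producing the NXDOMAIN/NOERROR pairs seen in the coredns log.
	ips, err := net.LookupIP("kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, ip := range ips {
		fmt.Println(ip.String())
	}
}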
	
	
	==> describe nodes <==
	Name:               ha-189125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T18_55_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:55:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:02:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:59:32 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:59:32 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:59:32 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:59:32 +0000   Sun, 18 Aug 2024 18:56:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    ha-189125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9520f8bfe7ab47fca640aa213dbc51c5
	  System UUID:                9520f8bf-e7ab-47fc-a640-aa213dbc51c5
	  Boot ID:                    d5000132-c81a-4416-b5cd-bc4cc58a7c4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kxdwj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-6f6b679f8f-7xr26             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m
	  kube-system                 coredns-6f6b679f8f-q9j97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m
	  kube-system                 etcd-ha-189125                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m7s
	  kube-system                 kindnet-jwxjh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m1s
	  kube-system                 kube-apiserver-ha-189125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-controller-manager-ha-189125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-proxy-96xwx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-scheduler-ha-189125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-vip-ha-189125                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m59s  kube-proxy       
	  Normal  Starting                 7m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m5s   kubelet          Node ha-189125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m5s   kubelet          Node ha-189125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m5s   kubelet          Node ha-189125 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m1s   node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal  NodeReady                6m44s  kubelet          Node ha-189125 status is now: NodeReady
	  Normal  RegisteredNode           5m13s  node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal  RegisteredNode           3m59s  node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	
	
	Name:               ha-189125-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T18_57_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:57:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:00:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 18 Aug 2024 18:59:43 +0000   Sun, 18 Aug 2024 19:01:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 18 Aug 2024 18:59:43 +0000   Sun, 18 Aug 2024 19:01:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 18 Aug 2024 18:59:43 +0000   Sun, 18 Aug 2024 19:01:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 18 Aug 2024 18:59:43 +0000   Sun, 18 Aug 2024 19:01:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    ha-189125-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3324dc2b927f496881437c52ed831dff
	  System UUID:                3324dc2b-927f-4968-8143-7c52ed831dff
	  Boot ID:                    6101e739-12c5-4cc4-a553-76e9cbc2860b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8bwfj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-189125-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-qhnpv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-189125-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-controller-manager-ha-189125-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-scwlr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-189125-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-vip-ha-189125-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  NodeNotReady             106s                   node-controller  Node ha-189125-m02 status is now: NodeNotReady
	
	
	Name:               ha-189125-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T18_58_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:58:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:03:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:59:56 +0000   Sun, 18 Aug 2024 18:58:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:59:56 +0000   Sun, 18 Aug 2024 18:58:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:59:56 +0000   Sun, 18 Aug 2024 18:58:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:59:56 +0000   Sun, 18 Aug 2024 18:59:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-189125-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d3ec6cee66841f19e0d0001d5bf49e3
	  System UUID:                4d3ec6ce-e668-41f1-9e0d-0001d5bf49e3
	  Boot ID:                    585df22f-cf7d-498d-8ff9-1aca3ea7e00a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fvdcn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-189125-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-24xql                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-189125-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-189125-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-proxy-22f8v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-189125-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-189125-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-189125-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-189125-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-189125-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	
	
	Name:               ha-189125-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T19_00_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:00:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:02:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:00:31 +0000   Sun, 18 Aug 2024 19:00:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:00:31 +0000   Sun, 18 Aug 2024 19:00:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:00:31 +0000   Sun, 18 Aug 2024 19:00:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:00:31 +0000   Sun, 18 Aug 2024 19:00:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    ha-189125-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aaeec6aea01d4746832fda2dc541437c
	  System UUID:                aaeec6ae-a01d-4746-832f-da2dc541437c
	  Boot ID:                    2ec6b825-44fb-4ba0-9681-61c7a55de5a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24hmx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-krtg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m3s)  kubelet          Node ha-189125-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m3s)  kubelet          Node ha-189125-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m3s)  kubelet          Node ha-189125-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-189125-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug18 18:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050548] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039074] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782036] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.495044] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.591693] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.511172] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053311] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195743] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.133817] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.270401] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.027475] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +4.080385] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.059467] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140089] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.075123] kauditd_printk_skb: 79 callbacks suppressed
	[Aug18 18:56] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.023234] kauditd_printk_skb: 36 callbacks suppressed
	[Aug18 18:57] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125] <==
	{"level":"warn","ts":"2024-08-18T19:03:02.639480Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.717323Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.723988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.732692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.737061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.748610Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.757347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.768210Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.772328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.775748Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.781803Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.789406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.796331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.800072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.803678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.811168Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.817414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.819399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.827415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.832139Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.835844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.839968Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.846770Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.854715Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:03:02.916783Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:03:02 up 7 min,  0 users,  load average: 0.16, 0.18, 0.10
	Linux ha-189125 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6] <==
	I0818 19:02:28.453288       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:02:38.453497       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:02:38.453549       1 main.go:299] handling current node
	I0818 19:02:38.453566       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:02:38.453574       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:02:38.453712       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:02:38.453735       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:02:38.453791       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:02:38.453819       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:02:48.451608       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:02:48.451679       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:02:48.451873       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:02:48.451900       1 main.go:299] handling current node
	I0818 19:02:48.451923       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:02:48.451928       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:02:48.451984       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:02:48.451989       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:02:58.453467       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:02:58.453656       1 main.go:299] handling current node
	I0818 19:02:58.453699       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:02:58.453720       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:02:58.453892       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:02:58.453915       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:02:58.453999       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:02:58.454019       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036] <==
	W0818 18:55:56.382944       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.49]
	I0818 18:55:56.383990       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 18:55:56.390416       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0818 18:55:56.606559       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0818 18:55:57.547886       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0818 18:55:57.562444       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0818 18:55:57.576559       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0818 18:56:01.366139       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0818 18:56:02.263130       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0818 18:59:28.369785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54992: use of closed network connection
	E0818 18:59:28.558513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55012: use of closed network connection
	E0818 18:59:28.742483       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55034: use of closed network connection
	E0818 18:59:28.943998       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55042: use of closed network connection
	E0818 18:59:29.123328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55064: use of closed network connection
	E0818 18:59:29.294453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55086: use of closed network connection
	E0818 18:59:29.469439       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55094: use of closed network connection
	E0818 18:59:29.648527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55102: use of closed network connection
	E0818 18:59:29.809407       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55106: use of closed network connection
	E0818 18:59:30.094649       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55150: use of closed network connection
	E0818 18:59:30.279626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55178: use of closed network connection
	E0818 18:59:30.453032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55194: use of closed network connection
	E0818 18:59:30.628588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55198: use of closed network connection
	E0818 18:59:30.795912       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55206: use of closed network connection
	E0818 18:59:30.992016       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55214: use of closed network connection
	W0818 19:00:56.400647       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.49]
	
	
	==> kube-controller-manager [972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0] <==
	I0818 19:00:00.302291       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-189125-m04\" does not exist"
	I0818 19:00:00.340342       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-189125-m04" podCIDRs=["10.244.3.0/24"]
	I0818 19:00:00.340413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:00.340450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:00.617248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:00.991453       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:01.377709       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-189125-m04"
	I0818 19:00:01.485256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:03.700873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:03.728731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:04.827124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:05.076796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:10.560341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:21.897937       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-189125-m04"
	I0818 19:00:21.898570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:21.918149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:23.721783       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:31.086068       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:01:16.408611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-189125-m04"
	I0818 19:01:16.409701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	I0818 19:01:16.439469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	I0818 19:01:16.507571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.846361ms"
	I0818 19:01:16.507745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.486µs"
	I0818 19:01:18.818484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	I0818 19:01:21.592183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	
	
	==> kube-proxy [d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:56:03.608539       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:56:03.625403       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.49"]
	E0818 18:56:03.625483       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:56:03.667004       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:56:03.667048       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:56:03.667128       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:56:03.669742       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:56:03.670274       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:56:03.670298       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:56:03.672205       1 config.go:197] "Starting service config controller"
	I0818 18:56:03.672291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:56:03.672527       1 config.go:326] "Starting node config controller"
	I0818 18:56:03.672553       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:56:03.673050       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:56:03.673190       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:56:03.773288       1 shared_informer.go:320] Caches are synced for node config
	I0818 18:56:03.773384       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:56:03.776665       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058] <==
	W0818 18:55:55.891235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 18:55:55.891329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:55:55.970802       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 18:55:55.970854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 18:55:55.975170       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 18:55:55.975215       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0818 18:55:58.645332       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 18:58:54.897380       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-22f8v\": pod kube-proxy-22f8v is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-22f8v" node="ha-189125-m03"
	E0818 18:58:54.897530       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 446b7123-e92b-4ce3-b3a4-d096e00ea7e9(kube-system/kube-proxy-22f8v) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-22f8v"
	E0818 18:58:54.897583       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-22f8v\": pod kube-proxy-22f8v is already assigned to node \"ha-189125-m03\"" pod="kube-system/kube-proxy-22f8v"
	I0818 18:58:54.897633       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-22f8v" node="ha-189125-m03"
	E0818 18:58:54.898809       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24xql\": pod kindnet-24xql is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-24xql" node="ha-189125-m03"
	E0818 18:58:54.898876       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba1034b3-04c9-4c64-8fde-7b45ea42f21c(kube-system/kindnet-24xql) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24xql"
	E0818 18:58:54.898900       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24xql\": pod kindnet-24xql is already assigned to node \"ha-189125-m03\"" pod="kube-system/kindnet-24xql"
	I0818 18:58:54.898918       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24xql" node="ha-189125-m03"
	E0818 18:59:23.602753       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8bwfj\": pod busybox-7dff88458-8bwfj is already assigned to node \"ha-189125-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8bwfj" node="ha-189125-m02"
	E0818 18:59:23.602879       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8bwfj\": pod busybox-7dff88458-8bwfj is already assigned to node \"ha-189125-m02\"" pod="default/busybox-7dff88458-8bwfj"
	E0818 18:59:23.652419       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fvdcn\": pod busybox-7dff88458-fvdcn is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fvdcn" node="ha-189125-m03"
	E0818 18:59:23.652848       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 19fc5686-7021-4b6f-a097-71f7b6d6a76e(default/busybox-7dff88458-fvdcn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fvdcn"
	E0818 18:59:23.652953       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fvdcn\": pod busybox-7dff88458-fvdcn is already assigned to node \"ha-189125-m03\"" pod="default/busybox-7dff88458-fvdcn"
	I0818 18:59:23.653004       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fvdcn" node="ha-189125-m03"
	E0818 18:59:23.653552       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kxdwj\": pod busybox-7dff88458-kxdwj is already assigned to node \"ha-189125\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kxdwj" node="ha-189125"
	E0818 18:59:23.655579       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e2ebdc21-75ca-43ac-86f2-7c492eefe97d(default/busybox-7dff88458-kxdwj) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-kxdwj"
	E0818 18:59:23.655718       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kxdwj\": pod busybox-7dff88458-kxdwj is already assigned to node \"ha-189125\"" pod="default/busybox-7dff88458-kxdwj"
	I0818 18:59:23.655773       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-kxdwj" node="ha-189125"
	
	
	==> kubelet <==
	Aug 18 19:01:47 ha-189125 kubelet[1332]: E0818 19:01:47.615526    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007707615186798,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:01:57 ha-189125 kubelet[1332]: E0818 19:01:57.535412    1332 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:01:57 ha-189125 kubelet[1332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:01:57 ha-189125 kubelet[1332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:01:57 ha-189125 kubelet[1332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:01:57 ha-189125 kubelet[1332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:01:57 ha-189125 kubelet[1332]: E0818 19:01:57.617163    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007717616793475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:01:57 ha-189125 kubelet[1332]: E0818 19:01:57.617203    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007717616793475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:07 ha-189125 kubelet[1332]: E0818 19:02:07.619927    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007727619414726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:07 ha-189125 kubelet[1332]: E0818 19:02:07.620456    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007727619414726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:17 ha-189125 kubelet[1332]: E0818 19:02:17.622834    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007737622532030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:17 ha-189125 kubelet[1332]: E0818 19:02:17.622880    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007737622532030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:27 ha-189125 kubelet[1332]: E0818 19:02:27.624698    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007747624401694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:27 ha-189125 kubelet[1332]: E0818 19:02:27.624743    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007747624401694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:37 ha-189125 kubelet[1332]: E0818 19:02:37.626597    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007757626261504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:37 ha-189125 kubelet[1332]: E0818 19:02:37.626619    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007757626261504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:47 ha-189125 kubelet[1332]: E0818 19:02:47.629683    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007767629013053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:47 ha-189125 kubelet[1332]: E0818 19:02:47.629733    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007767629013053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:57 ha-189125 kubelet[1332]: E0818 19:02:57.530806    1332 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:02:57 ha-189125 kubelet[1332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:02:57 ha-189125 kubelet[1332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:02:57 ha-189125 kubelet[1332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:02:57 ha-189125 kubelet[1332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:02:57 ha-189125 kubelet[1332]: E0818 19:02:57.632173    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007777631413993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:57 ha-189125 kubelet[1332]: E0818 19:02:57.632257    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007777631413993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-189125 -n ha-189125
helpers_test.go:261: (dbg) Run:  kubectl --context ha-189125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.04s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (55.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 3 (3.210864522s)

                                                
                                                
-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:03:07.469237   30458 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:03:07.469356   30458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:07.469366   30458 out.go:358] Setting ErrFile to fd 2...
	I0818 19:03:07.469373   30458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:07.469554   30458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:03:07.469735   30458 out.go:352] Setting JSON to false
	I0818 19:03:07.469766   30458 mustload.go:65] Loading cluster: ha-189125
	I0818 19:03:07.469867   30458 notify.go:220] Checking for updates...
	I0818 19:03:07.470220   30458 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:03:07.470236   30458 status.go:255] checking status of ha-189125 ...
	I0818 19:03:07.470657   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:07.470718   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:07.489060   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32973
	I0818 19:03:07.489471   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:07.490050   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:07.490076   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:07.490441   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:07.490660   30458 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:03:07.492270   30458 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:03:07.492295   30458 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:07.492662   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:07.492700   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:07.507687   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35817
	I0818 19:03:07.508133   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:07.508591   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:07.508622   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:07.508902   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:07.509080   30458 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:03:07.512230   30458 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:07.512645   30458 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:07.512684   30458 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:07.512806   30458 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:07.513214   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:07.513263   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:07.527607   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I0818 19:03:07.527983   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:07.528401   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:07.528422   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:07.528733   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:07.528921   30458 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:03:07.529125   30458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:07.529154   30458 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:03:07.531489   30458 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:07.531900   30458 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:07.531930   30458 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:07.532036   30458 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:03:07.532215   30458 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:03:07.532362   30458 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:03:07.532531   30458 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:03:07.612877   30458 ssh_runner.go:195] Run: systemctl --version
	I0818 19:03:07.618916   30458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:07.644657   30458 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:07.644688   30458 api_server.go:166] Checking apiserver status ...
	I0818 19:03:07.644722   30458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:07.662028   30458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:03:07.673677   30458 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:07.673736   30458 ssh_runner.go:195] Run: ls
	I0818 19:03:07.678605   30458 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:07.684654   30458 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:07.684675   30458 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:03:07.684684   30458 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:07.684705   30458 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:03:07.685007   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:07.685041   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:07.701909   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0818 19:03:07.702328   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:07.702811   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:07.702834   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:07.703143   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:07.703323   30458 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:03:07.704867   30458 status.go:330] ha-189125-m02 host status = "Running" (err=<nil>)
	I0818 19:03:07.704886   30458 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:07.705159   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:07.705191   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:07.719890   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35657
	I0818 19:03:07.720346   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:07.720774   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:07.720813   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:07.721075   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:07.721237   30458 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 19:03:07.723990   30458 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:07.724332   30458 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:07.724356   30458 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:07.724511   30458 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:07.724847   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:07.724889   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:07.740343   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0818 19:03:07.740762   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:07.741210   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:07.741228   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:07.741534   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:07.741701   30458 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:03:07.741935   30458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:07.741955   30458 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:03:07.744773   30458 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:07.745204   30458 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:07.745227   30458 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:07.745335   30458 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:03:07.745517   30458 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:03:07.745682   30458 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:03:07.745804   30458 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	W0818 19:03:10.267738   30458 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:10.267832   30458 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0818 19:03:10.267853   30458 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:10.267879   30458 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 19:03:10.267900   30458 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:10.267912   30458 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:03:10.268385   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:10.268448   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:10.284091   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44683
	I0818 19:03:10.284600   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:10.285119   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:10.285145   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:10.285443   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:10.285619   30458 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:03:10.287220   30458 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:03:10.287233   30458 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:10.287616   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:10.287659   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:10.301975   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0818 19:03:10.302379   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:10.302906   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:10.302925   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:10.303252   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:10.303441   30458 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:03:10.305804   30458 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:10.306203   30458 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:10.306229   30458 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:10.306335   30458 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:10.306662   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:10.306698   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:10.321242   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0818 19:03:10.321581   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:10.322056   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:10.322078   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:10.322361   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:10.322552   30458 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:03:10.322734   30458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:10.322758   30458 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:03:10.325849   30458 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:10.326291   30458 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:10.326326   30458 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:10.326439   30458 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:03:10.326619   30458 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:03:10.326767   30458 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:03:10.326886   30458 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:03:10.418604   30458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:10.442654   30458 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:10.442686   30458 api_server.go:166] Checking apiserver status ...
	I0818 19:03:10.442725   30458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:10.456391   30458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:03:10.467491   30458 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:10.467561   30458 ssh_runner.go:195] Run: ls
	I0818 19:03:10.471774   30458 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:10.478247   30458 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:10.478269   30458 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:03:10.478280   30458 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:10.478300   30458 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:03:10.478670   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:10.478710   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:10.494488   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I0818 19:03:10.494907   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:10.495396   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:10.495419   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:10.495783   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:10.495979   30458 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:03:10.497435   30458 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:03:10.497449   30458 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:10.497781   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:10.497817   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:10.512390   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0818 19:03:10.512729   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:10.513120   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:10.513139   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:10.513412   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:10.513593   30458 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:03:10.516602   30458 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:10.517008   30458 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:10.517041   30458 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:10.517191   30458 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:10.517540   30458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:10.517577   30458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:10.532567   30458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I0818 19:03:10.532941   30458 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:10.533355   30458 main.go:141] libmachine: Using API Version  1
	I0818 19:03:10.533370   30458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:10.533663   30458 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:10.533826   30458 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:03:10.534006   30458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:10.534036   30458 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:03:10.536301   30458 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:10.536664   30458 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:10.536685   30458 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:10.536842   30458 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:03:10.537004   30458 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:03:10.537271   30458 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:03:10.537415   30458 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:03:10.622971   30458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:10.637918   30458 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 3 (5.415569404s)

                                                
                                                
-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:03:11.386321   30543 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:03:11.386527   30543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:11.386536   30543 out.go:358] Setting ErrFile to fd 2...
	I0818 19:03:11.386540   30543 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:11.386707   30543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:03:11.386866   30543 out.go:352] Setting JSON to false
	I0818 19:03:11.386891   30543 mustload.go:65] Loading cluster: ha-189125
	I0818 19:03:11.387019   30543 notify.go:220] Checking for updates...
	I0818 19:03:11.387414   30543 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:03:11.387433   30543 status.go:255] checking status of ha-189125 ...
	I0818 19:03:11.387882   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:11.387943   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:11.406709   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I0818 19:03:11.407107   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:11.407762   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:11.407786   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:11.408252   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:11.408489   30543 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:03:11.409967   30543 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:03:11.409983   30543 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:11.410367   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:11.410413   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:11.425806   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I0818 19:03:11.426320   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:11.426944   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:11.426972   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:11.427314   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:11.427511   30543 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:03:11.430497   30543 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:11.430908   30543 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:11.430940   30543 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:11.431110   30543 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:11.431471   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:11.431513   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:11.447239   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I0818 19:03:11.447778   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:11.448311   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:11.448335   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:11.448625   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:11.448804   30543 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:03:11.448985   30543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:11.449013   30543 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:03:11.451645   30543 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:11.452107   30543 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:11.452134   30543 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:11.452317   30543 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:03:11.452557   30543 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:03:11.452691   30543 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:03:11.452843   30543 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:03:11.530855   30543 ssh_runner.go:195] Run: systemctl --version
	I0818 19:03:11.539048   30543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:11.557709   30543 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:11.557741   30543 api_server.go:166] Checking apiserver status ...
	I0818 19:03:11.557773   30543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:11.571825   30543 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:03:11.584732   30543 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:11.584782   30543 ssh_runner.go:195] Run: ls
	I0818 19:03:11.589091   30543 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:11.595049   30543 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:11.595088   30543 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:03:11.595106   30543 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:11.595127   30543 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:03:11.595498   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:11.595533   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:11.610779   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0818 19:03:11.611200   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:11.611732   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:11.611752   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:11.612062   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:11.612278   30543 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:03:11.613938   30543 status.go:330] ha-189125-m02 host status = "Running" (err=<nil>)
	I0818 19:03:11.613950   30543 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:11.614230   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:11.614259   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:11.630223   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0818 19:03:11.630704   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:11.631254   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:11.631278   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:11.631594   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:11.631761   30543 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 19:03:11.634905   30543 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:11.635488   30543 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:11.635515   30543 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:11.635729   30543 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:11.636127   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:11.636172   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:11.651687   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I0818 19:03:11.652097   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:11.652528   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:11.652548   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:11.652854   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:11.653040   30543 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:03:11.653263   30543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:11.653286   30543 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:03:11.655871   30543 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:11.656362   30543 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:11.656387   30543 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:11.656485   30543 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:03:11.656670   30543 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:03:11.656818   30543 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:03:11.656938   30543 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	W0818 19:03:13.343713   30543 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:13.343785   30543 retry.go:31] will retry after 266.36498ms: dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:16.411644   30543 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:16.411722   30543 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0818 19:03:16.411735   30543 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:16.411757   30543 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 19:03:16.411772   30543 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:16.411794   30543 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:03:16.412088   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:16.412125   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:16.428312   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46179
	I0818 19:03:16.428772   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:16.429354   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:16.429374   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:16.429722   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:16.429901   30543 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:03:16.431548   30543 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:03:16.431566   30543 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:16.431859   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:16.431893   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:16.446889   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0818 19:03:16.447263   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:16.447695   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:16.447717   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:16.448021   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:16.448234   30543 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:03:16.451005   30543 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:16.451339   30543 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:16.451365   30543 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:16.451528   30543 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:16.451870   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:16.451907   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:16.466323   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0818 19:03:16.466747   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:16.467173   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:16.467197   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:16.467511   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:16.467690   30543 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:03:16.467879   30543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:16.467898   30543 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:03:16.470881   30543 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:16.471363   30543 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:16.471405   30543 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:16.471620   30543 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:03:16.471808   30543 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:03:16.471955   30543 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:03:16.472123   30543 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:03:16.554668   30543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:16.569240   30543 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:16.569263   30543 api_server.go:166] Checking apiserver status ...
	I0818 19:03:16.569294   30543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:16.583845   30543 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:03:16.593898   30543 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:16.593954   30543 ssh_runner.go:195] Run: ls
	I0818 19:03:16.598779   30543 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:16.605207   30543 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:16.605230   30543 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:03:16.605239   30543 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:16.605253   30543 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:03:16.605606   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:16.605641   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:16.620593   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0818 19:03:16.621043   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:16.621589   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:16.621615   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:16.621906   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:16.622131   30543 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:03:16.623745   30543 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:03:16.623761   30543 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:16.624042   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:16.624076   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:16.638311   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0818 19:03:16.638687   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:16.639158   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:16.639187   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:16.639533   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:16.639728   30543 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:03:16.642454   30543 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:16.642857   30543 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:16.642885   30543 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:16.643019   30543 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:16.643320   30543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:16.643352   30543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:16.657966   30543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41503
	I0818 19:03:16.658373   30543 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:16.658875   30543 main.go:141] libmachine: Using API Version  1
	I0818 19:03:16.658896   30543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:16.659188   30543 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:16.659359   30543 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:03:16.659612   30543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:16.659636   30543 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:03:16.662460   30543 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:16.662837   30543 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:16.662864   30543 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:16.663016   30543 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:03:16.663166   30543 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:03:16.663311   30543 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:03:16.663475   30543 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:03:16.746368   30543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:16.760727   30543 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 3 (5.357439273s)

                                                
                                                
-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:03:17.591480   30659 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:03:17.591590   30659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:17.591598   30659 out.go:358] Setting ErrFile to fd 2...
	I0818 19:03:17.591603   30659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:17.591791   30659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:03:17.591965   30659 out.go:352] Setting JSON to false
	I0818 19:03:17.591990   30659 mustload.go:65] Loading cluster: ha-189125
	I0818 19:03:17.592027   30659 notify.go:220] Checking for updates...
	I0818 19:03:17.592404   30659 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:03:17.592421   30659 status.go:255] checking status of ha-189125 ...
	I0818 19:03:17.592855   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:17.592895   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:17.612087   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34403
	I0818 19:03:17.612520   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:17.613051   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:17.613077   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:17.613454   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:17.613642   30659 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:03:17.615337   30659 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:03:17.615354   30659 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:17.615647   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:17.615681   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:17.630699   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46221
	I0818 19:03:17.631096   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:17.631516   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:17.631536   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:17.631859   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:17.632014   30659 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:03:17.634696   30659 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:17.635057   30659 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:17.635093   30659 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:17.635205   30659 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:17.635574   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:17.635607   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:17.649759   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33599
	I0818 19:03:17.650075   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:17.650570   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:17.650592   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:17.650860   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:17.651012   30659 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:03:17.651220   30659 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:17.651258   30659 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:03:17.653864   30659 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:17.654252   30659 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:17.654273   30659 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:17.654565   30659 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:03:17.654747   30659 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:03:17.654923   30659 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:03:17.655053   30659 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:03:17.735518   30659 ssh_runner.go:195] Run: systemctl --version
	I0818 19:03:17.741844   30659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:17.755825   30659 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:17.755849   30659 api_server.go:166] Checking apiserver status ...
	I0818 19:03:17.755879   30659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:17.769122   30659 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:03:17.778418   30659 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:17.778472   30659 ssh_runner.go:195] Run: ls
	I0818 19:03:17.782853   30659 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:17.788881   30659 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:17.788900   30659 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:03:17.788909   30659 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:17.788924   30659 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:03:17.789229   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:17.789265   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:17.804431   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0818 19:03:17.804845   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:17.805282   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:17.805301   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:17.805571   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:17.805754   30659 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:03:17.807214   30659 status.go:330] ha-189125-m02 host status = "Running" (err=<nil>)
	I0818 19:03:17.807231   30659 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:17.807534   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:17.807563   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:17.821663   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44161
	I0818 19:03:17.822060   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:17.822523   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:17.822550   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:17.822840   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:17.823012   30659 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 19:03:17.825780   30659 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:17.826153   30659 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:17.826176   30659 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:17.826326   30659 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:17.826638   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:17.826671   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:17.841167   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0818 19:03:17.841561   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:17.841963   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:17.841989   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:17.842288   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:17.842475   30659 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:03:17.842651   30659 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:17.842672   30659 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:03:17.845162   30659 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:17.845539   30659 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:17.845562   30659 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:17.845685   30659 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:03:17.845832   30659 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:03:17.846023   30659 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:03:17.846148   30659 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	W0818 19:03:19.483705   30659 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:19.483763   30659 retry.go:31] will retry after 332.115196ms: dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:22.555697   30659 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:22.555769   30659 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0818 19:03:22.555784   30659 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:22.555796   30659 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 19:03:22.555816   30659 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:22.555823   30659 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:03:22.556108   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:22.556147   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:22.571008   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0818 19:03:22.571450   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:22.571915   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:22.571944   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:22.572271   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:22.572467   30659 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:03:22.573953   30659 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:03:22.573970   30659 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:22.574391   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:22.574436   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:22.588969   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0818 19:03:22.589343   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:22.589798   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:22.589815   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:22.590196   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:22.590407   30659 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:03:22.593464   30659 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:22.593889   30659 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:22.593911   30659 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:22.594078   30659 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:22.594514   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:22.594558   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:22.609465   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33395
	I0818 19:03:22.609917   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:22.610420   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:22.610441   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:22.610707   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:22.610867   30659 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:03:22.611031   30659 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:22.611049   30659 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:03:22.613687   30659 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:22.614222   30659 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:22.614245   30659 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:22.614329   30659 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:03:22.614526   30659 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:03:22.614736   30659 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:03:22.614920   30659 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:03:22.698677   30659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:22.715291   30659 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:22.715316   30659 api_server.go:166] Checking apiserver status ...
	I0818 19:03:22.715360   30659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:22.732413   30659 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:03:22.743043   30659 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:22.743088   30659 ssh_runner.go:195] Run: ls
	I0818 19:03:22.747454   30659 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:22.751757   30659 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:22.751776   30659 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:03:22.751784   30659 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:22.751797   30659 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:03:22.752065   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:22.752101   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:22.766846   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I0818 19:03:22.767256   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:22.767714   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:22.767734   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:22.768070   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:22.768242   30659 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:03:22.769590   30659 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:03:22.769607   30659 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:22.769983   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:22.770022   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:22.783975   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32975
	I0818 19:03:22.784339   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:22.784894   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:22.784914   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:22.785215   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:22.785406   30659 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:03:22.788079   30659 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:22.788483   30659 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:22.788502   30659 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:22.788687   30659 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:22.789024   30659 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:22.789065   30659 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:22.803450   30659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0818 19:03:22.803790   30659 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:22.804217   30659 main.go:141] libmachine: Using API Version  1
	I0818 19:03:22.804237   30659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:22.804517   30659 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:22.804693   30659 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:03:22.804888   30659 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:22.804907   30659 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:03:22.807661   30659 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:22.808069   30659 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:22.808097   30659 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:22.808222   30659 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:03:22.808388   30659 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:03:22.808517   30659 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:03:22.808645   30659 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:03:22.890737   30659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:22.907599   30659 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 3 (4.928090739s)

-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0818 19:03:24.164811   30760 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:03:24.164934   30760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:24.164943   30760 out.go:358] Setting ErrFile to fd 2...
	I0818 19:03:24.164948   30760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:24.165120   30760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:03:24.165312   30760 out.go:352] Setting JSON to false
	I0818 19:03:24.165337   30760 mustload.go:65] Loading cluster: ha-189125
	I0818 19:03:24.165669   30760 notify.go:220] Checking for updates...
	I0818 19:03:24.166709   30760 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:03:24.166746   30760 status.go:255] checking status of ha-189125 ...
	I0818 19:03:24.167508   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:24.167546   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:24.183635   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0818 19:03:24.184070   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:24.184575   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:24.184594   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:24.184947   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:24.185192   30760 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:03:24.186587   30760 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:03:24.186600   30760 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:24.186885   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:24.186921   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:24.201874   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36549
	I0818 19:03:24.202376   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:24.202856   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:24.202883   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:24.203219   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:24.203375   30760 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:03:24.206413   30760 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:24.206857   30760 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:24.206887   30760 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:24.207017   30760 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:24.207366   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:24.207417   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:24.222487   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0818 19:03:24.222930   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:24.223471   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:24.223496   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:24.223899   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:24.224112   30760 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:03:24.224326   30760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:24.224362   30760 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:03:24.227680   30760 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:24.228131   30760 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:24.228171   30760 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:24.228292   30760 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:03:24.228475   30760 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:03:24.228624   30760 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:03:24.228770   30760 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:03:24.319854   30760 ssh_runner.go:195] Run: systemctl --version
	I0818 19:03:24.327086   30760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:24.343534   30760 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:24.343567   30760 api_server.go:166] Checking apiserver status ...
	I0818 19:03:24.343601   30760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:24.359008   30760 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:03:24.369767   30760 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:24.369839   30760 ssh_runner.go:195] Run: ls
	I0818 19:03:24.375232   30760 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:24.381643   30760 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:24.381665   30760 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:03:24.381675   30760 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:24.381692   30760 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:03:24.382009   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:24.382051   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:24.397237   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0818 19:03:24.397700   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:24.398086   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:24.398107   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:24.398535   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:24.398736   30760 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:03:24.400320   30760 status.go:330] ha-189125-m02 host status = "Running" (err=<nil>)
	I0818 19:03:24.400333   30760 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:24.400699   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:24.400738   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:24.415652   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41781
	I0818 19:03:24.416112   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:24.416612   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:24.416640   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:24.416939   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:24.417089   30760 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 19:03:24.419630   30760 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:24.420009   30760 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:24.420030   30760 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:24.420176   30760 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:24.420482   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:24.420541   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:24.434872   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38019
	I0818 19:03:24.435337   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:24.435804   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:24.435825   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:24.436165   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:24.436357   30760 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:03:24.436516   30760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:24.436533   30760 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:03:24.439370   30760 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:24.439859   30760 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:24.439882   30760 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:24.440053   30760 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:03:24.440203   30760 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:03:24.440344   30760 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:03:24.440498   30760 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	W0818 19:03:25.631674   30760 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:25.631742   30760 retry.go:31] will retry after 149.386074ms: dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:28.699719   30760 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:28.699802   30760 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0818 19:03:28.699825   30760 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:28.699836   30760 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 19:03:28.699878   30760 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:28.699894   30760 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:03:28.700228   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:28.700277   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:28.715541   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0818 19:03:28.715973   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:28.716469   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:28.716500   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:28.716914   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:28.717105   30760 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:03:28.719134   30760 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:03:28.719151   30760 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:28.719585   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:28.719627   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:28.734248   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39493
	I0818 19:03:28.734681   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:28.735132   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:28.735154   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:28.735475   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:28.735676   30760 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:03:28.739059   30760 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:28.739637   30760 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:28.739663   30760 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:28.739796   30760 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:28.740107   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:28.740150   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:28.755851   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39471
	I0818 19:03:28.756244   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:28.756700   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:28.756728   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:28.757065   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:28.757250   30760 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:03:28.757403   30760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:28.757423   30760 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:03:28.760031   30760 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:28.760398   30760 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:28.760433   30760 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:28.760544   30760 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:03:28.760714   30760 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:03:28.760841   30760 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:03:28.760951   30760 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:03:28.847264   30760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:28.862792   30760 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:28.862819   30760 api_server.go:166] Checking apiserver status ...
	I0818 19:03:28.862871   30760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:28.877595   30760 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:03:28.888619   30760 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:28.888698   30760 ssh_runner.go:195] Run: ls
	I0818 19:03:28.893314   30760 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:28.897765   30760 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:28.897787   30760 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:03:28.897798   30760 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:28.897818   30760 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:03:28.898129   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:28.898167   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:28.913281   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0818 19:03:28.913732   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:28.914270   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:28.914293   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:28.914634   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:28.914813   30760 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:03:28.916153   30760 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:03:28.916180   30760 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:28.916454   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:28.916494   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:28.931728   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0818 19:03:28.932166   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:28.932616   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:28.932631   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:28.932917   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:28.933127   30760 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:03:28.935843   30760 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:28.936307   30760 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:28.936332   30760 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:28.936539   30760 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:28.936828   30760 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:28.936859   30760 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:28.951711   30760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I0818 19:03:28.952049   30760 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:28.952521   30760 main.go:141] libmachine: Using API Version  1
	I0818 19:03:28.952539   30760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:28.952860   30760 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:28.953043   30760 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:03:28.953199   30760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:28.953217   30760 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:03:28.956173   30760 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:28.956679   30760 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:28.956702   30760 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:28.956869   30760 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:03:28.957023   30760 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:03:28.957209   30760 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:03:28.957359   30760 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:03:29.038738   30760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:29.053266   30760 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 3 (3.740380163s)

-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0818 19:03:31.538731   30862 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:03:31.538877   30862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:31.538889   30862 out.go:358] Setting ErrFile to fd 2...
	I0818 19:03:31.538895   30862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:31.539186   30862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:03:31.539449   30862 out.go:352] Setting JSON to false
	I0818 19:03:31.539486   30862 mustload.go:65] Loading cluster: ha-189125
	I0818 19:03:31.539672   30862 notify.go:220] Checking for updates...
	I0818 19:03:31.540020   30862 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:03:31.540081   30862 status.go:255] checking status of ha-189125 ...
	I0818 19:03:31.540661   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:31.540736   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:31.561117   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0818 19:03:31.561570   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:31.562184   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:31.562211   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:31.562533   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:31.562719   30862 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:03:31.564237   30862 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:03:31.564257   30862 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:31.564660   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:31.564703   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:31.579189   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0818 19:03:31.579578   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:31.580028   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:31.580050   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:31.580361   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:31.580557   30862 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:03:31.583175   30862 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:31.583658   30862 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:31.583683   30862 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:31.583809   30862 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:31.584080   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:31.584112   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:31.598344   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0818 19:03:31.598740   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:31.599162   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:31.599185   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:31.599542   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:31.599694   30862 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:03:31.599864   30862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:31.599891   30862 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:03:31.602423   30862 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:31.602907   30862 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:31.602936   30862 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:31.603045   30862 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:03:31.603238   30862 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:03:31.603405   30862 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:03:31.603549   30862 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:03:31.683088   30862 ssh_runner.go:195] Run: systemctl --version
	I0818 19:03:31.689424   30862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:31.704144   30862 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:31.704173   30862 api_server.go:166] Checking apiserver status ...
	I0818 19:03:31.704217   30862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:31.718471   30862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:03:31.728465   30862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:31.728543   30862 ssh_runner.go:195] Run: ls
	I0818 19:03:31.733025   30862 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:31.739071   30862 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:31.739094   30862 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:03:31.739106   30862 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:31.739126   30862 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:03:31.739571   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:31.739618   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:31.754114   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0818 19:03:31.754566   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:31.755054   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:31.755079   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:31.755458   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:31.755641   30862 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:03:31.757083   30862 status.go:330] ha-189125-m02 host status = "Running" (err=<nil>)
	I0818 19:03:31.757097   30862 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:31.757368   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:31.757399   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:31.771750   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
	I0818 19:03:31.772130   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:31.772585   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:31.772606   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:31.772885   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:31.773049   30862 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 19:03:31.776075   30862 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:31.776506   30862 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:31.776532   30862 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:31.776704   30862 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:31.777105   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:31.777153   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:31.791303   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0818 19:03:31.791759   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:31.792204   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:31.792239   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:31.792552   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:31.792693   30862 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:03:31.792936   30862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:31.792958   30862 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:03:31.795778   30862 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:31.796129   30862 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:31.796160   30862 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:31.796271   30862 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:03:31.796426   30862 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:03:31.796585   30862 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:03:31.796708   30862 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	W0818 19:03:34.875671   30862 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:34.875791   30862 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0818 19:03:34.875811   30862 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:34.875821   30862 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 19:03:34.875878   30862 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:34.875953   30862 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:03:34.876446   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:34.876500   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:34.891285   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0818 19:03:34.891715   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:34.892205   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:34.892251   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:34.892565   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:34.892809   30862 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:03:34.894385   30862 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:03:34.894400   30862 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:34.894802   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:34.894844   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:34.909668   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44233
	I0818 19:03:34.910097   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:34.910621   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:34.910642   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:34.910994   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:34.911232   30862 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:03:34.914353   30862 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:34.914929   30862 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:34.914956   30862 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:34.915100   30862 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:34.915421   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:34.915459   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:34.930031   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35273
	I0818 19:03:34.930457   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:34.930914   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:34.930944   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:34.931291   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:34.931483   30862 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:03:34.931638   30862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:34.931658   30862 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:03:34.934215   30862 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:34.934608   30862 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:34.934629   30862 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:34.934786   30862 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:03:34.934954   30862 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:03:34.935087   30862 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:03:34.935271   30862 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:03:35.023164   30862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:35.038056   30862 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:35.038083   30862 api_server.go:166] Checking apiserver status ...
	I0818 19:03:35.038115   30862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:35.055188   30862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:03:35.065433   30862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:35.065482   30862 ssh_runner.go:195] Run: ls
	I0818 19:03:35.069988   30862 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:35.074591   30862 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:35.074610   30862 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:03:35.074618   30862 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:35.074633   30862 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:03:35.074945   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:35.074979   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:35.090851   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0818 19:03:35.091319   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:35.091798   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:35.091822   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:35.092096   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:35.092261   30862 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:03:35.093891   30862 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:03:35.093905   30862 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:35.094182   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:35.094216   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:35.108939   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
	I0818 19:03:35.109406   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:35.109921   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:35.109941   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:35.110264   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:35.110470   30862 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:03:35.113185   30862 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:35.113547   30862 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:35.113590   30862 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:35.113752   30862 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:35.114093   30862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:35.114134   30862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:35.128772   30862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41585
	I0818 19:03:35.129165   30862 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:35.129645   30862 main.go:141] libmachine: Using API Version  1
	I0818 19:03:35.129685   30862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:35.130031   30862 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:35.130210   30862 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:03:35.130411   30862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:35.130439   30862 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:03:35.133177   30862 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:35.133564   30862 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:35.133592   30862 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:35.133666   30862 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:03:35.133826   30862 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:03:35.133944   30862 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:03:35.134072   30862 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:03:35.218521   30862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:35.232016   30862 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 3 (3.717840267s)

-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0818 19:03:38.404589   30979 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:03:38.404695   30979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:38.404704   30979 out.go:358] Setting ErrFile to fd 2...
	I0818 19:03:38.404709   30979 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:38.404877   30979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:03:38.405041   30979 out.go:352] Setting JSON to false
	I0818 19:03:38.405066   30979 mustload.go:65] Loading cluster: ha-189125
	I0818 19:03:38.405178   30979 notify.go:220] Checking for updates...
	I0818 19:03:38.405448   30979 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:03:38.405461   30979 status.go:255] checking status of ha-189125 ...
	I0818 19:03:38.405835   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:38.405885   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:38.424461   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0818 19:03:38.424992   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:38.425642   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:38.425671   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:38.426087   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:38.426350   30979 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:03:38.428082   30979 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:03:38.428095   30979 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:38.428366   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:38.428397   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:38.444004   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I0818 19:03:38.444403   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:38.444845   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:38.444885   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:38.445193   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:38.445381   30979 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:03:38.448058   30979 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:38.448518   30979 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:38.448555   30979 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:38.448715   30979 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:38.449053   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:38.449094   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:38.463589   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35067
	I0818 19:03:38.463996   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:38.464434   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:38.464465   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:38.464818   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:38.465012   30979 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:03:38.465183   30979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:38.465214   30979 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:03:38.468252   30979 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:38.468762   30979 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:38.468784   30979 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:38.468897   30979 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:03:38.469059   30979 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:03:38.469199   30979 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:03:38.469358   30979 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:03:38.547175   30979 ssh_runner.go:195] Run: systemctl --version
	I0818 19:03:38.553140   30979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:38.569487   30979 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:38.569514   30979 api_server.go:166] Checking apiserver status ...
	I0818 19:03:38.569558   30979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:38.583713   30979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:03:38.593533   30979 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:38.593585   30979 ssh_runner.go:195] Run: ls
	I0818 19:03:38.597887   30979 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:38.602003   30979 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:38.602023   30979 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:03:38.602037   30979 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:38.602060   30979 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:03:38.602464   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:38.602521   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:38.616966   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43453
	I0818 19:03:38.617322   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:38.617770   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:38.617788   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:38.618087   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:38.618328   30979 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:03:38.619904   30979 status.go:330] ha-189125-m02 host status = "Running" (err=<nil>)
	I0818 19:03:38.619920   30979 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:38.620209   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:38.620243   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:38.634378   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I0818 19:03:38.634784   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:38.635195   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:38.635218   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:38.635556   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:38.635759   30979 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 19:03:38.638696   30979 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:38.639247   30979 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:38.639268   30979 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:38.639442   30979 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:03:38.639863   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:38.639899   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:38.654421   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0818 19:03:38.654792   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:38.655244   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:38.655266   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:38.655599   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:38.655801   30979 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:03:38.655973   30979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:38.655993   30979 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:03:38.658562   30979 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:38.658947   30979 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:03:38.658979   30979 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:03:38.659189   30979 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:03:38.659360   30979 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:03:38.659518   30979 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:03:38.659634   30979 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	W0818 19:03:41.723652   30979 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.147:22: connect: no route to host
	W0818 19:03:41.723748   30979 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	E0818 19:03:41.723762   30979 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:41.723773   30979 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0818 19:03:41.723789   30979 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.147:22: connect: no route to host
	I0818 19:03:41.723797   30979 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:03:41.724106   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:41.724148   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:41.739061   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
	I0818 19:03:41.739644   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:41.740214   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:41.740235   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:41.740687   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:41.740904   30979 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:03:41.742922   30979 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:03:41.742940   30979 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:41.743312   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:41.743352   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:41.759016   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45961
	I0818 19:03:41.759449   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:41.759979   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:41.760009   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:41.760417   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:41.760612   30979 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:03:41.763577   30979 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:41.764099   30979 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:41.764129   30979 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:41.764258   30979 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:41.764650   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:41.764699   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:41.781280   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41253
	I0818 19:03:41.781631   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:41.782143   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:41.782180   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:41.782499   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:41.782696   30979 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:03:41.782865   30979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:41.782889   30979 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:03:41.785706   30979 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:41.786078   30979 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:41.786097   30979 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:41.786227   30979 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:03:41.786393   30979 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:03:41.786545   30979 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:03:41.786683   30979 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:03:41.870803   30979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:41.885643   30979 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:41.885673   30979 api_server.go:166] Checking apiserver status ...
	I0818 19:03:41.885708   30979 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:41.900225   30979 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:03:41.910752   30979 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:41.910813   30979 ssh_runner.go:195] Run: ls
	I0818 19:03:41.915311   30979 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:41.920459   30979 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:41.920485   30979 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:03:41.920494   30979 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:41.920508   30979 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:03:41.920787   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:41.920831   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:41.935857   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42005
	I0818 19:03:41.936261   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:41.936752   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:41.936775   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:41.937141   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:41.937342   30979 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:03:41.938956   30979 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:03:41.938972   30979 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:41.939319   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:41.939349   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:41.954229   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I0818 19:03:41.954730   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:41.955169   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:41.955196   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:41.955506   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:41.955689   30979 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:03:41.958330   30979 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:41.958739   30979 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:41.958783   30979 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:41.958893   30979 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:41.959263   30979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:41.959303   30979 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:41.973617   30979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
	I0818 19:03:41.974037   30979 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:41.974538   30979 main.go:141] libmachine: Using API Version  1
	I0818 19:03:41.974564   30979 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:41.974863   30979 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:41.975054   30979 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:03:41.975265   30979 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:41.975287   30979 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:03:41.977638   30979 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:41.978014   30979 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:41.978059   30979 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:41.978142   30979 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:03:41.978295   30979 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:03:41.978448   30979 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:03:41.978583   30979 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:03:42.066514   30979 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:42.080411   30979 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
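In the run above, the status probe for ha-189125-m02 never gets past opening the SSH connection: the dial at sshutil.go:64 fails with `connect: no route to host`, so the node is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent and the command exits with status 3. In the run that follows, once libvirt reports the domain itself as stopped, the same node shows Host:Stopped and the exit code changes to 7. A minimal sketch of the kind of reachability check that fails here, assuming a plain net.DialTimeout against the node's SSH port (address and timeout are illustrative, not minikube's actual retry logic):

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable reports whether the node's SSH port accepts a TCP connection,
// mirroring the dial that fails with "no route to host" in the trace above.
func reachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err // e.g. "dial tcp 192.168.39.147:22: connect: no route to host"
	}
	return conn.Close()
}

func main() {
	if err := reachable("192.168.39.147:22", 3*time.Second); err != nil {
		fmt.Println("ha-189125-m02: Host:Error -", err)
		return
	}
	fmt.Println("ha-189125-m02: SSH port reachable")
}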
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 7 (618.763063ms)

                                                
                                                
-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:03:46.184741   31100 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:03:46.184878   31100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:46.184888   31100 out.go:358] Setting ErrFile to fd 2...
	I0818 19:03:46.184892   31100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:03:46.185055   31100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:03:46.185232   31100 out.go:352] Setting JSON to false
	I0818 19:03:46.185266   31100 mustload.go:65] Loading cluster: ha-189125
	I0818 19:03:46.185302   31100 notify.go:220] Checking for updates...
	I0818 19:03:46.185709   31100 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:03:46.185725   31100 status.go:255] checking status of ha-189125 ...
	I0818 19:03:46.186201   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.186239   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.206304   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I0818 19:03:46.206785   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.207367   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.207432   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.207796   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.207963   31100 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:03:46.209898   31100 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:03:46.209916   31100 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:46.210207   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.210242   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.225517   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39253
	I0818 19:03:46.226004   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.226417   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.226442   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.226744   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.226928   31100 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:03:46.230372   31100 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:46.230913   31100 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:46.230934   31100 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:46.231079   31100 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:03:46.231409   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.231453   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.246042   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41251
	I0818 19:03:46.246433   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.246874   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.246893   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.247239   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.247453   31100 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:03:46.247664   31100 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:46.247689   31100 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:03:46.250239   31100 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:46.250616   31100 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:03:46.250648   31100 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:03:46.250751   31100 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:03:46.250928   31100 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:03:46.251056   31100 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:03:46.251178   31100 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:03:46.334369   31100 ssh_runner.go:195] Run: systemctl --version
	I0818 19:03:46.342534   31100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:46.358338   31100 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:46.358377   31100 api_server.go:166] Checking apiserver status ...
	I0818 19:03:46.358417   31100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:46.373929   31100 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:03:46.383609   31100 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:46.383661   31100 ssh_runner.go:195] Run: ls
	I0818 19:03:46.388086   31100 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:46.394545   31100 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:46.394564   31100 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:03:46.394573   31100 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:46.394597   31100 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:03:46.394907   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.394943   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.409615   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0818 19:03:46.409979   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.410444   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.410462   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.410738   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.410950   31100 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:03:46.412635   31100 status.go:330] ha-189125-m02 host status = "Stopped" (err=<nil>)
	I0818 19:03:46.412650   31100 status.go:343] host is not running, skipping remaining checks
	I0818 19:03:46.412656   31100 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:46.412682   31100 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:03:46.412992   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.413031   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.429810   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39115
	I0818 19:03:46.430252   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.430724   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.430753   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.431089   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.431274   31100 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:03:46.432986   31100 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:03:46.433000   31100 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:46.433353   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.433389   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.448107   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0818 19:03:46.448524   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.449033   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.449058   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.449334   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.449541   31100 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:03:46.452178   31100 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:46.452645   31100 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:46.452686   31100 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:46.452765   31100 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:03:46.453058   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.453089   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.467285   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0818 19:03:46.467742   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.468179   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.468204   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.468506   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.468699   31100 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:03:46.468927   31100 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:46.468948   31100 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:03:46.471746   31100 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:46.472173   31100 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:03:46.472204   31100 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:03:46.472299   31100 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:03:46.472473   31100 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:03:46.472619   31100 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:03:46.472782   31100 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:03:46.555403   31100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:46.570151   31100 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:03:46.570175   31100 api_server.go:166] Checking apiserver status ...
	I0818 19:03:46.570206   31100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:03:46.584094   31100 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:03:46.594428   31100 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:03:46.594482   31100 ssh_runner.go:195] Run: ls
	I0818 19:03:46.598971   31100 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:03:46.603285   31100 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:03:46.603306   31100 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:03:46.603313   31100 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:03:46.603326   31100 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:03:46.603641   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.603681   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.618179   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45635
	I0818 19:03:46.618578   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.619040   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.619061   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.619358   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.619562   31100 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:03:46.621193   31100 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:03:46.621211   31100 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:46.621523   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.621565   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.635711   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0818 19:03:46.636138   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.636587   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.636606   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.636935   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.637122   31100 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:03:46.640169   31100 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:46.640651   31100 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:46.640688   31100 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:46.640815   31100 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:03:46.641219   31100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:03:46.641262   31100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:03:46.656050   31100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40223
	I0818 19:03:46.656498   31100 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:03:46.656937   31100 main.go:141] libmachine: Using API Version  1
	I0818 19:03:46.656956   31100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:03:46.657300   31100 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:03:46.657524   31100 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:03:46.657874   31100 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:03:46.657898   31100 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:03:46.660866   31100 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:46.661372   31100 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:03:46.661406   31100 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:03:46.661572   31100 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:03:46.661730   31100 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:03:46.661889   31100 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:03:46.662023   31100 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:03:46.747041   31100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:03:46.761562   31100 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
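For the control-plane nodes that are reachable, the trace above shows the apiserver check in three steps: a pgrep for the kube-apiserver process, a freezer-cgroup lookup that fails but is tolerated, and finally a GET against https://192.168.39.254:8443/healthz, which is accepted when it returns 200 "ok". A minimal sketch of that last probe, assuming a plain net/http client with TLS verification skipped (the endpoint is taken from the log; the timeout and TLS handling are assumptions, not minikube's actual code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues the same kind of GET the log shows at api_server.go:253
// and treats a 200 response as "apiserver Running".
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The load-balancer endpoint serves a cluster-internal certificate,
		// so verification is skipped in this sketch (assumption).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver status = Error:", err)
		return
	}
	fmt.Println("apiserver status = Running")
}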
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 7 (620.087166ms)

                                                
                                                
-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-189125-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:04:00.274179   31220 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:04:00.274300   31220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:04:00.274311   31220 out.go:358] Setting ErrFile to fd 2...
	I0818 19:04:00.274317   31220 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:04:00.274523   31220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:04:00.274696   31220 out.go:352] Setting JSON to false
	I0818 19:04:00.274725   31220 mustload.go:65] Loading cluster: ha-189125
	I0818 19:04:00.274824   31220 notify.go:220] Checking for updates...
	I0818 19:04:00.275231   31220 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:04:00.275248   31220 status.go:255] checking status of ha-189125 ...
	I0818 19:04:00.275800   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.275896   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.293752   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0818 19:04:00.294147   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.294712   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.294738   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.295047   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.295247   31220 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:04:00.296728   31220 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:04:00.296740   31220 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:04:00.297055   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.297096   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.311565   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38359
	I0818 19:04:00.311936   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.312446   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.312471   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.312850   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.313115   31220 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:04:00.315915   31220 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:04:00.316346   31220 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:04:00.316380   31220 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:04:00.316561   31220 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:04:00.316843   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.316878   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.331465   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0818 19:04:00.331939   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.332390   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.332412   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.332780   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.332931   31220 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:04:00.333137   31220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:04:00.333174   31220 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:04:00.336160   31220 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:04:00.336634   31220 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:04:00.336662   31220 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:04:00.336798   31220 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:04:00.336974   31220 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:04:00.337165   31220 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:04:00.337314   31220 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:04:00.415450   31220 ssh_runner.go:195] Run: systemctl --version
	I0818 19:04:00.423202   31220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:04:00.439259   31220 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:04:00.439293   31220 api_server.go:166] Checking apiserver status ...
	I0818 19:04:00.439341   31220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:04:00.455020   31220 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	W0818 19:04:00.465047   31220 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:04:00.465108   31220 ssh_runner.go:195] Run: ls
	I0818 19:04:00.469276   31220 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:04:00.475605   31220 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:04:00.475630   31220 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:04:00.475644   31220 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:04:00.475675   31220 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:04:00.475990   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.476023   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.490389   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42845
	I0818 19:04:00.490723   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.491197   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.491216   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.491577   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.491759   31220 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:04:00.493329   31220 status.go:330] ha-189125-m02 host status = "Stopped" (err=<nil>)
	I0818 19:04:00.493346   31220 status.go:343] host is not running, skipping remaining checks
	I0818 19:04:00.493354   31220 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:04:00.493387   31220 status.go:255] checking status of ha-189125-m03 ...
	I0818 19:04:00.493733   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.493783   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.508040   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45037
	I0818 19:04:00.508427   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.508882   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.508901   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.509204   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.509428   31220 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:04:00.511074   31220 status.go:330] ha-189125-m03 host status = "Running" (err=<nil>)
	I0818 19:04:00.511099   31220 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:04:00.511396   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.511448   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.525692   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42451
	I0818 19:04:00.526083   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.526612   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.526632   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.526975   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.527147   31220 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 19:04:00.530271   31220 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:04:00.530733   31220 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:04:00.530758   31220 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:04:00.530881   31220 host.go:66] Checking if "ha-189125-m03" exists ...
	I0818 19:04:00.531194   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.531227   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.546170   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40885
	I0818 19:04:00.546533   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.546955   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.546977   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.547330   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.547512   31220 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:04:00.547707   31220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:04:00.547726   31220 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:04:00.550289   31220 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:04:00.550633   31220 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:04:00.550656   31220 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:04:00.550765   31220 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:04:00.550917   31220 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:04:00.551035   31220 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:04:00.551128   31220 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:04:00.634992   31220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:04:00.651242   31220 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:04:00.651267   31220 api_server.go:166] Checking apiserver status ...
	I0818 19:04:00.651301   31220 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:04:00.668453   31220 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	W0818 19:04:00.679210   31220 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:04:00.679280   31220 ssh_runner.go:195] Run: ls
	I0818 19:04:00.683831   31220 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:04:00.688216   31220 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:04:00.688241   31220 status.go:422] ha-189125-m03 apiserver status = Running (err=<nil>)
	I0818 19:04:00.688252   31220 status.go:257] ha-189125-m03 status: &{Name:ha-189125-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:04:00.688271   31220 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:04:00.688588   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.688654   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.703423   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40811
	I0818 19:04:00.703825   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.704286   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.704305   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.704592   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.704799   31220 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:04:00.706503   31220 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:04:00.706516   31220 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:04:00.706783   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.706817   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.720829   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41961
	I0818 19:04:00.721172   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.721577   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.721599   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.721889   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.722073   31220 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:04:00.724778   31220 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:04:00.725156   31220 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:04:00.725188   31220 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:04:00.725325   31220 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:04:00.725762   31220 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:00.725807   31220 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:00.740264   31220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43067
	I0818 19:04:00.740667   31220 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:00.741110   31220 main.go:141] libmachine: Using API Version  1
	I0818 19:04:00.741132   31220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:00.741423   31220 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:00.741584   31220 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:04:00.741772   31220 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:04:00.741794   31220 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:04:00.744661   31220 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:04:00.745046   31220 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:04:00.745067   31220 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:04:00.745233   31220 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:04:00.745392   31220 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:04:00.745522   31220 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:04:00.745635   31220 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:04:00.835795   31220 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:04:00.851613   31220 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-189125 -n ha-189125
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-189125 logs -n 25: (1.398005925s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125:/home/docker/cp-test_ha-189125-m03_ha-189125.txt                       |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125 sudo cat                                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125.txt                                 |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m02:/home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m02 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04:/home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m04 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp testdata/cp-test.txt                                                | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125:/home/docker/cp-test_ha-189125-m04_ha-189125.txt                       |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125 sudo cat                                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125.txt                                 |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m02:/home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m02 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03:/home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m03 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-189125 node stop m02 -v=7                                                     | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-189125 node start m02 -v=7                                                    | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:55:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:55:16.832717   25471 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:55:16.832945   25471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:55:16.832952   25471 out.go:358] Setting ErrFile to fd 2...
	I0818 18:55:16.832957   25471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:55:16.833133   25471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 18:55:16.833656   25471 out.go:352] Setting JSON to false
	I0818 18:55:16.834453   25471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2261,"bootTime":1724005056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:55:16.834502   25471 start.go:139] virtualization: kvm guest
	I0818 18:55:16.836466   25471 out.go:177] * [ha-189125] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:55:16.837827   25471 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:55:16.837833   25471 notify.go:220] Checking for updates...
	I0818 18:55:16.840203   25471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:55:16.841388   25471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:55:16.842493   25471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:55:16.843652   25471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:55:16.844817   25471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:55:16.846129   25471 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:55:16.880645   25471 out.go:177] * Using the kvm2 driver based on user configuration
	I0818 18:55:16.881721   25471 start.go:297] selected driver: kvm2
	I0818 18:55:16.881739   25471 start.go:901] validating driver "kvm2" against <nil>
	I0818 18:55:16.881750   25471 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:55:16.882417   25471 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:55:16.882488   25471 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 18:55:16.897244   25471 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 18:55:16.897295   25471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:55:16.897485   25471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:55:16.897557   25471 cni.go:84] Creating CNI manager for ""
	I0818 18:55:16.897568   25471 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0818 18:55:16.897573   25471 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0818 18:55:16.897619   25471 start.go:340] cluster config:
	{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:55:16.897705   25471 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:55:16.899614   25471 out.go:177] * Starting "ha-189125" primary control-plane node in "ha-189125" cluster
	I0818 18:55:16.900764   25471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:55:16.900805   25471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 18:55:16.900830   25471 cache.go:56] Caching tarball of preloaded images
	I0818 18:55:16.900937   25471 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 18:55:16.900948   25471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 18:55:16.901329   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:55:16.901358   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json: {Name:mk37ad2e33452381b7bc2ec4f6729509252ed83d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:16.901517   25471 start.go:360] acquireMachinesLock for ha-189125: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:55:16.901556   25471 start.go:364] duration metric: took 20.868µs to acquireMachinesLock for "ha-189125"
	I0818 18:55:16.901574   25471 start.go:93] Provisioning new machine with config: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:55:16.901634   25471 start.go:125] createHost starting for "" (driver="kvm2")
	I0818 18:55:16.903091   25471 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 18:55:16.903200   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:55:16.903232   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:55:16.917286   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34833
	I0818 18:55:16.917669   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:55:16.918149   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:55:16.918169   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:55:16.918479   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:55:16.918662   25471 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 18:55:16.918795   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:16.918981   25471 start.go:159] libmachine.API.Create for "ha-189125" (driver="kvm2")
	I0818 18:55:16.919010   25471 client.go:168] LocalClient.Create starting
	I0818 18:55:16.919035   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 18:55:16.919068   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:55:16.919086   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:55:16.919145   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 18:55:16.919164   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:55:16.919178   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:55:16.919193   25471 main.go:141] libmachine: Running pre-create checks...
	I0818 18:55:16.919200   25471 main.go:141] libmachine: (ha-189125) Calling .PreCreateCheck
	I0818 18:55:16.919587   25471 main.go:141] libmachine: (ha-189125) Calling .GetConfigRaw
	I0818 18:55:16.919935   25471 main.go:141] libmachine: Creating machine...
	I0818 18:55:16.919947   25471 main.go:141] libmachine: (ha-189125) Calling .Create
	I0818 18:55:16.920053   25471 main.go:141] libmachine: (ha-189125) Creating KVM machine...
	I0818 18:55:16.921268   25471 main.go:141] libmachine: (ha-189125) DBG | found existing default KVM network
	I0818 18:55:16.921919   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:16.921778   25494 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0818 18:55:16.921937   25471 main.go:141] libmachine: (ha-189125) DBG | created network xml: 
	I0818 18:55:16.921950   25471 main.go:141] libmachine: (ha-189125) DBG | <network>
	I0818 18:55:16.921962   25471 main.go:141] libmachine: (ha-189125) DBG |   <name>mk-ha-189125</name>
	I0818 18:55:16.921976   25471 main.go:141] libmachine: (ha-189125) DBG |   <dns enable='no'/>
	I0818 18:55:16.921982   25471 main.go:141] libmachine: (ha-189125) DBG |   
	I0818 18:55:16.922010   25471 main.go:141] libmachine: (ha-189125) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0818 18:55:16.922031   25471 main.go:141] libmachine: (ha-189125) DBG |     <dhcp>
	I0818 18:55:16.922057   25471 main.go:141] libmachine: (ha-189125) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0818 18:55:16.922068   25471 main.go:141] libmachine: (ha-189125) DBG |     </dhcp>
	I0818 18:55:16.922078   25471 main.go:141] libmachine: (ha-189125) DBG |   </ip>
	I0818 18:55:16.922085   25471 main.go:141] libmachine: (ha-189125) DBG |   
	I0818 18:55:16.922097   25471 main.go:141] libmachine: (ha-189125) DBG | </network>
	I0818 18:55:16.922110   25471 main.go:141] libmachine: (ha-189125) DBG | 
	I0818 18:55:16.927287   25471 main.go:141] libmachine: (ha-189125) DBG | trying to create private KVM network mk-ha-189125 192.168.39.0/24...
	I0818 18:55:16.988469   25471 main.go:141] libmachine: (ha-189125) DBG | private KVM network mk-ha-189125 192.168.39.0/24 created
	I0818 18:55:16.988518   25471 main.go:141] libmachine: (ha-189125) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125 ...
	I0818 18:55:16.988537   25471 main.go:141] libmachine: (ha-189125) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:55:16.988550   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:16.988436   25494 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:55:16.988631   25471 main.go:141] libmachine: (ha-189125) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 18:55:17.226147   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:17.226036   25494 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa...
	I0818 18:55:17.511195   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:17.511048   25494 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/ha-189125.rawdisk...
	I0818 18:55:17.511222   25471 main.go:141] libmachine: (ha-189125) DBG | Writing magic tar header
	I0818 18:55:17.511232   25471 main.go:141] libmachine: (ha-189125) DBG | Writing SSH key tar header
	I0818 18:55:17.511240   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:17.511170   25494 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125 ...
	I0818 18:55:17.511305   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125
	I0818 18:55:17.511333   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125 (perms=drwx------)
	I0818 18:55:17.511358   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 18:55:17.511372   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 18:55:17.511412   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 18:55:17.511432   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:55:17.511464   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 18:55:17.511480   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 18:55:17.511489   25471 main.go:141] libmachine: (ha-189125) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 18:55:17.511502   25471 main.go:141] libmachine: (ha-189125) Creating domain...
	I0818 18:55:17.511522   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 18:55:17.511535   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 18:55:17.511541   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home/jenkins
	I0818 18:55:17.511549   25471 main.go:141] libmachine: (ha-189125) DBG | Checking permissions on dir: /home
	I0818 18:55:17.511556   25471 main.go:141] libmachine: (ha-189125) DBG | Skipping /home - not owner
	I0818 18:55:17.512592   25471 main.go:141] libmachine: (ha-189125) define libvirt domain using xml: 
	I0818 18:55:17.512616   25471 main.go:141] libmachine: (ha-189125) <domain type='kvm'>
	I0818 18:55:17.512626   25471 main.go:141] libmachine: (ha-189125)   <name>ha-189125</name>
	I0818 18:55:17.512638   25471 main.go:141] libmachine: (ha-189125)   <memory unit='MiB'>2200</memory>
	I0818 18:55:17.512650   25471 main.go:141] libmachine: (ha-189125)   <vcpu>2</vcpu>
	I0818 18:55:17.512660   25471 main.go:141] libmachine: (ha-189125)   <features>
	I0818 18:55:17.512668   25471 main.go:141] libmachine: (ha-189125)     <acpi/>
	I0818 18:55:17.512678   25471 main.go:141] libmachine: (ha-189125)     <apic/>
	I0818 18:55:17.512686   25471 main.go:141] libmachine: (ha-189125)     <pae/>
	I0818 18:55:17.512705   25471 main.go:141] libmachine: (ha-189125)     
	I0818 18:55:17.512726   25471 main.go:141] libmachine: (ha-189125)   </features>
	I0818 18:55:17.512740   25471 main.go:141] libmachine: (ha-189125)   <cpu mode='host-passthrough'>
	I0818 18:55:17.512746   25471 main.go:141] libmachine: (ha-189125)   
	I0818 18:55:17.512755   25471 main.go:141] libmachine: (ha-189125)   </cpu>
	I0818 18:55:17.512763   25471 main.go:141] libmachine: (ha-189125)   <os>
	I0818 18:55:17.512774   25471 main.go:141] libmachine: (ha-189125)     <type>hvm</type>
	I0818 18:55:17.512785   25471 main.go:141] libmachine: (ha-189125)     <boot dev='cdrom'/>
	I0818 18:55:17.512792   25471 main.go:141] libmachine: (ha-189125)     <boot dev='hd'/>
	I0818 18:55:17.512798   25471 main.go:141] libmachine: (ha-189125)     <bootmenu enable='no'/>
	I0818 18:55:17.512804   25471 main.go:141] libmachine: (ha-189125)   </os>
	I0818 18:55:17.512809   25471 main.go:141] libmachine: (ha-189125)   <devices>
	I0818 18:55:17.512819   25471 main.go:141] libmachine: (ha-189125)     <disk type='file' device='cdrom'>
	I0818 18:55:17.512846   25471 main.go:141] libmachine: (ha-189125)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/boot2docker.iso'/>
	I0818 18:55:17.512868   25471 main.go:141] libmachine: (ha-189125)       <target dev='hdc' bus='scsi'/>
	I0818 18:55:17.512879   25471 main.go:141] libmachine: (ha-189125)       <readonly/>
	I0818 18:55:17.512883   25471 main.go:141] libmachine: (ha-189125)     </disk>
	I0818 18:55:17.512892   25471 main.go:141] libmachine: (ha-189125)     <disk type='file' device='disk'>
	I0818 18:55:17.512900   25471 main.go:141] libmachine: (ha-189125)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 18:55:17.512917   25471 main.go:141] libmachine: (ha-189125)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/ha-189125.rawdisk'/>
	I0818 18:55:17.512925   25471 main.go:141] libmachine: (ha-189125)       <target dev='hda' bus='virtio'/>
	I0818 18:55:17.512931   25471 main.go:141] libmachine: (ha-189125)     </disk>
	I0818 18:55:17.512945   25471 main.go:141] libmachine: (ha-189125)     <interface type='network'>
	I0818 18:55:17.512958   25471 main.go:141] libmachine: (ha-189125)       <source network='mk-ha-189125'/>
	I0818 18:55:17.512972   25471 main.go:141] libmachine: (ha-189125)       <model type='virtio'/>
	I0818 18:55:17.512980   25471 main.go:141] libmachine: (ha-189125)     </interface>
	I0818 18:55:17.512985   25471 main.go:141] libmachine: (ha-189125)     <interface type='network'>
	I0818 18:55:17.512990   25471 main.go:141] libmachine: (ha-189125)       <source network='default'/>
	I0818 18:55:17.512994   25471 main.go:141] libmachine: (ha-189125)       <model type='virtio'/>
	I0818 18:55:17.512999   25471 main.go:141] libmachine: (ha-189125)     </interface>
	I0818 18:55:17.513003   25471 main.go:141] libmachine: (ha-189125)     <serial type='pty'>
	I0818 18:55:17.513008   25471 main.go:141] libmachine: (ha-189125)       <target port='0'/>
	I0818 18:55:17.513012   25471 main.go:141] libmachine: (ha-189125)     </serial>
	I0818 18:55:17.513017   25471 main.go:141] libmachine: (ha-189125)     <console type='pty'>
	I0818 18:55:17.513023   25471 main.go:141] libmachine: (ha-189125)       <target type='serial' port='0'/>
	I0818 18:55:17.513031   25471 main.go:141] libmachine: (ha-189125)     </console>
	I0818 18:55:17.513042   25471 main.go:141] libmachine: (ha-189125)     <rng model='virtio'>
	I0818 18:55:17.513052   25471 main.go:141] libmachine: (ha-189125)       <backend model='random'>/dev/random</backend>
	I0818 18:55:17.513059   25471 main.go:141] libmachine: (ha-189125)     </rng>
	I0818 18:55:17.513066   25471 main.go:141] libmachine: (ha-189125)     
	I0818 18:55:17.513071   25471 main.go:141] libmachine: (ha-189125)     
	I0818 18:55:17.513075   25471 main.go:141] libmachine: (ha-189125)   </devices>
	I0818 18:55:17.513079   25471 main.go:141] libmachine: (ha-189125) </domain>
	I0818 18:55:17.513086   25471 main.go:141] libmachine: (ha-189125) 
	I0818 18:55:17.516836   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:be:c8:bc in network default
	I0818 18:55:17.517392   25471 main.go:141] libmachine: (ha-189125) Ensuring networks are active...
	I0818 18:55:17.517417   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:17.517999   25471 main.go:141] libmachine: (ha-189125) Ensuring network default is active
	I0818 18:55:17.518309   25471 main.go:141] libmachine: (ha-189125) Ensuring network mk-ha-189125 is active
	I0818 18:55:17.518725   25471 main.go:141] libmachine: (ha-189125) Getting domain xml...
	I0818 18:55:17.519345   25471 main.go:141] libmachine: (ha-189125) Creating domain...
	I0818 18:55:18.708441   25471 main.go:141] libmachine: (ha-189125) Waiting to get IP...
	I0818 18:55:18.709297   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:18.709695   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:18.709727   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:18.709674   25494 retry.go:31] will retry after 206.092137ms: waiting for machine to come up
	I0818 18:55:18.916995   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:18.917414   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:18.917448   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:18.917370   25494 retry.go:31] will retry after 385.757474ms: waiting for machine to come up
	I0818 18:55:19.304852   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:19.305282   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:19.305310   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:19.305235   25494 retry.go:31] will retry after 462.930892ms: waiting for machine to come up
	I0818 18:55:19.769936   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:19.770312   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:19.770334   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:19.770283   25494 retry.go:31] will retry after 474.206876ms: waiting for machine to come up
	I0818 18:55:20.246010   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:20.246434   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:20.246462   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:20.246383   25494 retry.go:31] will retry after 554.966147ms: waiting for machine to come up
	I0818 18:55:20.803186   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:20.803667   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:20.803702   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:20.803601   25494 retry.go:31] will retry after 691.96919ms: waiting for machine to come up
	I0818 18:55:21.497609   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:21.498099   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:21.498130   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:21.498068   25494 retry.go:31] will retry after 1.121268882s: waiting for machine to come up
	I0818 18:55:22.620829   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:22.621298   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:22.621324   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:22.621247   25494 retry.go:31] will retry after 1.211418408s: waiting for machine to come up
	I0818 18:55:23.834734   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:23.835096   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:23.835133   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:23.835054   25494 retry.go:31] will retry after 1.210290747s: waiting for machine to come up
	I0818 18:55:25.047326   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:25.047678   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:25.047707   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:25.047626   25494 retry.go:31] will retry after 2.136992489s: waiting for machine to come up
	I0818 18:55:27.185755   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:27.186178   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:27.186204   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:27.186110   25494 retry.go:31] will retry after 2.212172863s: waiting for machine to come up
	I0818 18:55:29.399454   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:29.399875   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:29.399912   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:29.399826   25494 retry.go:31] will retry after 2.265404223s: waiting for machine to come up
	I0818 18:55:31.666568   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:31.666935   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:31.666964   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:31.666892   25494 retry.go:31] will retry after 4.302632484s: waiting for machine to come up
	I0818 18:55:35.973932   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:35.974308   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find current IP address of domain ha-189125 in network mk-ha-189125
	I0818 18:55:35.974333   25471 main.go:141] libmachine: (ha-189125) DBG | I0818 18:55:35.974266   25494 retry.go:31] will retry after 3.43667283s: waiting for machine to come up
	I0818 18:55:39.412726   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.413154   25471 main.go:141] libmachine: (ha-189125) Found IP for machine: 192.168.39.49
	I0818 18:55:39.413170   25471 main.go:141] libmachine: (ha-189125) Reserving static IP address...
	I0818 18:55:39.413182   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has current primary IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.413644   25471 main.go:141] libmachine: (ha-189125) DBG | unable to find host DHCP lease matching {name: "ha-189125", mac: "52:54:00:e9:51:81", ip: "192.168.39.49"} in network mk-ha-189125
	I0818 18:55:39.481998   25471 main.go:141] libmachine: (ha-189125) DBG | Getting to WaitForSSH function...
	I0818 18:55:39.482030   25471 main.go:141] libmachine: (ha-189125) Reserved static IP address: 192.168.39.49
	I0818 18:55:39.482048   25471 main.go:141] libmachine: (ha-189125) Waiting for SSH to be available...
	I0818 18:55:39.484453   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.484849   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.484872   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.485012   25471 main.go:141] libmachine: (ha-189125) DBG | Using SSH client type: external
	I0818 18:55:39.485033   25471 main.go:141] libmachine: (ha-189125) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa (-rw-------)
	I0818 18:55:39.485151   25471 main.go:141] libmachine: (ha-189125) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.49 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 18:55:39.485169   25471 main.go:141] libmachine: (ha-189125) DBG | About to run SSH command:
	I0818 18:55:39.485188   25471 main.go:141] libmachine: (ha-189125) DBG | exit 0
	I0818 18:55:39.607190   25471 main.go:141] libmachine: (ha-189125) DBG | SSH cmd err, output: <nil>: 
	I0818 18:55:39.607480   25471 main.go:141] libmachine: (ha-189125) KVM machine creation complete!
	I0818 18:55:39.607826   25471 main.go:141] libmachine: (ha-189125) Calling .GetConfigRaw
	I0818 18:55:39.608369   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:39.608527   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:39.608663   25471 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 18:55:39.608680   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:55:39.609760   25471 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 18:55:39.609773   25471 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 18:55:39.609778   25471 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 18:55:39.609783   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:39.612219   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.612570   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.612596   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.612715   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:39.612889   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.613042   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.613175   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:39.613338   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:39.613570   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:39.613586   25471 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 18:55:39.710361   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:55:39.710385   25471 main.go:141] libmachine: Detecting the provisioner...
	I0818 18:55:39.710396   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:39.713049   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.713345   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.713368   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.713532   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:39.713705   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.713861   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.713980   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:39.714219   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:39.714463   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:39.714478   25471 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 18:55:39.811866   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 18:55:39.811938   25471 main.go:141] libmachine: found compatible host: buildroot
	I0818 18:55:39.811948   25471 main.go:141] libmachine: Provisioning with buildroot...
	I0818 18:55:39.811955   25471 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 18:55:39.812198   25471 buildroot.go:166] provisioning hostname "ha-189125"
	I0818 18:55:39.812220   25471 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 18:55:39.812401   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:39.814672   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.814994   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.815021   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.815148   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:39.815329   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.815496   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.815623   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:39.815770   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:39.815955   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:39.815973   25471 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-189125 && echo "ha-189125" | sudo tee /etc/hostname
	I0818 18:55:39.929682   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125
	
	I0818 18:55:39.929712   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:39.932326   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.932689   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:39.932711   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:39.932837   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:39.933010   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.933143   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:39.933248   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:39.933393   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:39.933569   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:39.933590   25471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-189125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-189125/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-189125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:55:40.040891   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:55:40.040919   25471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 18:55:40.040975   25471 buildroot.go:174] setting up certificates
	I0818 18:55:40.040991   25471 provision.go:84] configureAuth start
	I0818 18:55:40.041007   25471 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 18:55:40.041264   25471 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 18:55:40.044223   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.044514   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.044537   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.044671   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.046879   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.047190   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.047224   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.047362   25471 provision.go:143] copyHostCerts
	I0818 18:55:40.047405   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:55:40.047449   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 18:55:40.047466   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:55:40.047547   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 18:55:40.047671   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:55:40.047700   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 18:55:40.047714   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:55:40.047755   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 18:55:40.047834   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:55:40.047857   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 18:55:40.047867   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:55:40.047905   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 18:55:40.047985   25471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.ha-189125 san=[127.0.0.1 192.168.39.49 ha-189125 localhost minikube]
	I0818 18:55:40.137859   25471 provision.go:177] copyRemoteCerts
	I0818 18:55:40.137907   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:55:40.137937   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.140484   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.140822   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.140846   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.141020   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.141217   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.141356   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.141490   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:55:40.221683   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 18:55:40.221748   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:55:40.246144   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 18:55:40.246221   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0818 18:55:40.270891   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 18:55:40.270950   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 18:55:40.294407   25471 provision.go:87] duration metric: took 253.403083ms to configureAuth
	I0818 18:55:40.294429   25471 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:55:40.294570   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:55:40.294631   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.297201   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.297647   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.297683   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.297866   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.298046   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.298204   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.298385   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.298535   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:40.298693   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:40.298714   25471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 18:55:40.552081   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 18:55:40.552127   25471 main.go:141] libmachine: Checking connection to Docker...
	I0818 18:55:40.552134   25471 main.go:141] libmachine: (ha-189125) Calling .GetURL
	I0818 18:55:40.553429   25471 main.go:141] libmachine: (ha-189125) DBG | Using libvirt version 6000000
	I0818 18:55:40.555606   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.555907   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.555930   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.556075   25471 main.go:141] libmachine: Docker is up and running!
	I0818 18:55:40.556091   25471 main.go:141] libmachine: Reticulating splines...
	I0818 18:55:40.556099   25471 client.go:171] duration metric: took 23.637082284s to LocalClient.Create
	I0818 18:55:40.556123   25471 start.go:167] duration metric: took 23.637142268s to libmachine.API.Create "ha-189125"
	I0818 18:55:40.556130   25471 start.go:293] postStartSetup for "ha-189125" (driver="kvm2")
	I0818 18:55:40.556140   25471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:55:40.556164   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.556362   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:55:40.556384   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.558396   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.558652   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.558676   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.558751   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.558911   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.559052   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.559167   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:55:40.637386   25471 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:55:40.642028   25471 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:55:40.642047   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 18:55:40.642111   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 18:55:40.642192   25471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 18:55:40.642205   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 18:55:40.642323   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 18:55:40.651801   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:55:40.678851   25471 start.go:296] duration metric: took 122.709599ms for postStartSetup
	I0818 18:55:40.678900   25471 main.go:141] libmachine: (ha-189125) Calling .GetConfigRaw
	I0818 18:55:40.679466   25471 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 18:55:40.681984   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.682315   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.682362   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.682583   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:55:40.682768   25471 start.go:128] duration metric: took 23.781124031s to createHost
	I0818 18:55:40.682793   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.684715   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.684964   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.684991   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.685094   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.685280   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.685436   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.685582   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.685742   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:55:40.685898   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 18:55:40.685910   25471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:55:40.784180   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007340.760671490
	
	I0818 18:55:40.784203   25471 fix.go:216] guest clock: 1724007340.760671490
	I0818 18:55:40.784213   25471 fix.go:229] Guest: 2024-08-18 18:55:40.76067149 +0000 UTC Remote: 2024-08-18 18:55:40.682779935 +0000 UTC m=+23.887777007 (delta=77.891555ms)
	I0818 18:55:40.784237   25471 fix.go:200] guest clock delta is within tolerance: 77.891555ms
	I0818 18:55:40.784243   25471 start.go:83] releasing machines lock for "ha-189125", held for 23.882677576s
	I0818 18:55:40.784261   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.784488   25471 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 18:55:40.786870   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.787148   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.787181   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.787307   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.787790   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.787958   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:55:40.788045   25471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:55:40.788083   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.788208   25471 ssh_runner.go:195] Run: cat /version.json
	I0818 18:55:40.788233   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:55:40.790599   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.790807   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.790879   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.790909   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.791036   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.791181   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:40.791195   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:40.791197   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.791334   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.791407   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:55:40.791548   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:55:40.791544   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:55:40.791656   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:55:40.791776   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:55:40.864592   25471 ssh_runner.go:195] Run: systemctl --version
	I0818 18:55:40.890693   25471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 18:55:41.052400   25471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:55:41.058445   25471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:55:41.058527   25471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:55:41.074831   25471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 18:55:41.074857   25471 start.go:495] detecting cgroup driver to use...
	I0818 18:55:41.074927   25471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:55:41.091671   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:55:41.108653   25471 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:55:41.108714   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:55:41.122060   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:55:41.135284   25471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:55:41.251804   25471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:55:41.416163   25471 docker.go:233] disabling docker service ...
	I0818 18:55:41.416252   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:55:41.430940   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:55:41.443776   25471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:55:41.565375   25471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:55:41.695008   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:55:41.708805   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:55:41.726948   25471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 18:55:41.727005   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.736547   25471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 18:55:41.736622   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.746391   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.755878   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.765834   25471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:55:41.775713   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.785050   25471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:55:41.801478   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
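All of the sed edits above rewrite the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. A minimal way to spot-check the result from a shell on the node (illustrative only, not captured in this run):

  # confirm the pause image, cgroup driver, conmon cgroup and sysctl override landed
  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
  # CRI-O only rereads the drop-in on restart (the log restarts it a few lines below)
  sudo systemctl restart crio && systemctl is-active crio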
	I0818 18:55:41.810894   25471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:55:41.819551   25471 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 18:55:41.819604   25471 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 18:55:41.831737   25471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
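The sysctl failure above is expected before br_netfilter is loaded; once the modprobe and the ip_forward echo have run, the state can be confirmed with (a sketch, not part of the logged run):

  lsmod | grep br_netfilter                   # module present after the modprobe above
  sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded
  cat /proc/sys/net/ipv4/ip_forward           # should print 1 after the echo above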
	I0818 18:55:41.842090   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:55:41.966114   25471 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 18:55:42.104549   25471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 18:55:42.104617   25471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 18:55:42.109616   25471 start.go:563] Will wait 60s for crictl version
	I0818 18:55:42.109673   25471 ssh_runner.go:195] Run: which crictl
	I0818 18:55:42.113520   25471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:55:42.153776   25471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 18:55:42.153850   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:55:42.181340   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:55:42.211132   25471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 18:55:42.212527   25471 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 18:55:42.215214   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:42.215615   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:55:42.215644   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:55:42.215829   25471 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:55:42.220002   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:55:42.232820   25471 kubeadm.go:883] updating cluster {Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 18:55:42.232909   25471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:55:42.232951   25471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:55:42.265128   25471 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 18:55:42.265194   25471 ssh_runner.go:195] Run: which lz4
	I0818 18:55:42.269025   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0818 18:55:42.269130   25471 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 18:55:42.273218   25471 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 18:55:42.273249   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 18:55:43.595544   25471 crio.go:462] duration metric: took 1.326438024s to copy over tarball
	I0818 18:55:43.595612   25471 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 18:55:45.624453   25471 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.028819366s)
	I0818 18:55:45.624479   25471 crio.go:469] duration metric: took 2.028909373s to extract the tarball
	I0818 18:55:45.624486   25471 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 18:55:45.661892   25471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:55:45.704692   25471 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 18:55:45.704716   25471 cache_images.go:84] Images are preloaded, skipping loading
	I0818 18:55:45.704725   25471 kubeadm.go:934] updating node { 192.168.39.49 8443 v1.31.0 crio true true} ...
	I0818 18:55:45.704841   25471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-189125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
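The kubelet unit override shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below; the merged unit the node will actually run can be inspected with (illustrative, not part of the logged run):

  systemctl cat kubelet                            # base unit plus the 10-kubeadm.conf drop-in
  systemctl show kubelet -p ExecStart --no-pager   # the effective kubelet command line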
	I0818 18:55:45.704904   25471 ssh_runner.go:195] Run: crio config
	I0818 18:55:45.753433   25471 cni.go:84] Creating CNI manager for ""
	I0818 18:55:45.753451   25471 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0818 18:55:45.753460   25471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 18:55:45.753482   25471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.49 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-189125 NodeName:ha-189125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 18:55:45.753619   25471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-189125"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 18:55:45.753640   25471 kube-vip.go:115] generating kube-vip config ...
	I0818 18:55:45.753680   25471 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 18:55:45.769318   25471 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 18:55:45.769457   25471 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
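This manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet runs it as a static pod. If the rendered file is saved locally (the kube-vip.yaml name below is hypothetical), it can be schema-checked without a running cluster:

  kubectl apply --dry-run=client -f kube-vip.yaml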
	I0818 18:55:45.769529   25471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:55:45.779319   25471 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 18:55:45.779409   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 18:55:45.789058   25471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0818 18:55:45.806318   25471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:55:45.823264   25471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0818 18:55:45.840624   25471 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0818 18:55:45.857213   25471 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0818 18:55:45.861395   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:55:45.873798   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:55:45.991237   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:55:46.008028   25471 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125 for IP: 192.168.39.49
	I0818 18:55:46.008055   25471 certs.go:194] generating shared ca certs ...
	I0818 18:55:46.008074   25471 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.008264   25471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 18:55:46.008325   25471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 18:55:46.008335   25471 certs.go:256] generating profile certs ...
	I0818 18:55:46.008421   25471 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key
	I0818 18:55:46.008438   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt with IP's: []
	I0818 18:55:46.215007   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt ...
	I0818 18:55:46.215035   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt: {Name:mk60b149cc8b4a83d937fcffc9f8b33d5653340f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.215197   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key ...
	I0818 18:55:46.215208   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key: {Name:mke859b45cac026e257f0afd9ac7d88fa3a8c8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.215287   25471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.8e559455
	I0818 18:55:46.215302   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.8e559455 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49 192.168.39.254]
	I0818 18:55:46.290985   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.8e559455 ...
	I0818 18:55:46.291013   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.8e559455: {Name:mke54735a227e9f631f593460c369a782702e610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.291156   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.8e559455 ...
	I0818 18:55:46.291175   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.8e559455: {Name:mk0a6642fa814770fc81f492baeea14c00651aad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.291245   25471 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.8e559455 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt
	I0818 18:55:46.291323   25471 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.8e559455 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key
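The apiserver certificate generated above was requested with SANs for 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP 192.168.39.49 and the HA VIP 192.168.39.254; they can be read back from the profile copy (a sketch using the path from this run):

  openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt \
      | grep -A1 'Subject Alternative Name'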
	I0818 18:55:46.291397   25471 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key
	I0818 18:55:46.291417   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt with IP's: []
	I0818 18:55:46.434855   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt ...
	I0818 18:55:46.434883   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt: {Name:mkf6e55369f3d420e87f16cc023d112c682ebc1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.435029   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key ...
	I0818 18:55:46.435041   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key: {Name:mk26df8f001944899b15a3c943b0263d2ac4c738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:55:46.435114   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 18:55:46.435135   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 18:55:46.435148   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 18:55:46.435162   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 18:55:46.435177   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 18:55:46.435190   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 18:55:46.435203   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 18:55:46.435217   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 18:55:46.435264   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 18:55:46.435296   25471 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 18:55:46.435306   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:55:46.435328   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:55:46.435349   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:55:46.435372   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 18:55:46.435443   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:55:46.435473   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:55:46.435485   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 18:55:46.435498   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 18:55:46.436039   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:55:46.461729   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:55:46.484922   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:55:46.507667   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 18:55:46.530575   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 18:55:46.553533   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 18:55:46.576503   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:55:46.599561   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 18:55:46.622811   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:55:46.646083   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 18:55:46.668996   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 18:55:46.691751   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 18:55:46.707948   25471 ssh_runner.go:195] Run: openssl version
	I0818 18:55:46.713703   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:55:46.723993   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:55:46.728627   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:55:46.728687   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:55:46.734384   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 18:55:46.744236   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 18:55:46.754073   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 18:55:46.758539   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 18:55:46.758577   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 18:55:46.763947   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 18:55:46.776756   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 18:55:46.787133   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 18:55:46.798849   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 18:55:46.798893   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 18:55:46.807702   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
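The /etc/ssl/certs/<hash>.0 links created above follow OpenSSL's subject-hash naming convention: the link name is the output of the "openssl x509 -hash -noout" calls interleaved with them. For example, with the paths from this run:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem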
	I0818 18:55:46.821923   25471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:55:46.829286   25471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:55:46.829344   25471 kubeadm.go:392] StartCluster: {Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:55:46.829419   25471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 18:55:46.829485   25471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 18:55:46.867204   25471 cri.go:89] found id: ""
	I0818 18:55:46.867284   25471 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 18:55:46.877047   25471 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 18:55:46.886645   25471 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 18:55:46.895945   25471 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 18:55:46.895969   25471 kubeadm.go:157] found existing configuration files:
	
	I0818 18:55:46.896022   25471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 18:55:46.905063   25471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 18:55:46.905127   25471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 18:55:46.914364   25471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 18:55:46.922916   25471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 18:55:46.922973   25471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 18:55:46.932232   25471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 18:55:46.940809   25471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 18:55:46.940871   25471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 18:55:46.949854   25471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 18:55:46.959016   25471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 18:55:46.959065   25471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 18:55:46.968021   25471 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 18:55:47.066601   25471 kubeadm.go:310] W0818 18:55:47.044986     857 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:55:47.067020   25471 kubeadm.go:310] W0818 18:55:47.046006     857 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:55:47.182288   25471 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 18:55:58.150958   25471 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 18:55:58.151022   25471 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 18:55:58.151115   25471 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 18:55:58.151230   25471 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 18:55:58.151364   25471 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 18:55:58.151477   25471 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 18:55:58.152947   25471 out.go:235]   - Generating certificates and keys ...
	I0818 18:55:58.153024   25471 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 18:55:58.153081   25471 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 18:55:58.153137   25471 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0818 18:55:58.153208   25471 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0818 18:55:58.153286   25471 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0818 18:55:58.153337   25471 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0818 18:55:58.153388   25471 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0818 18:55:58.153498   25471 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-189125 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	I0818 18:55:58.153558   25471 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0818 18:55:58.153695   25471 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-189125 localhost] and IPs [192.168.39.49 127.0.0.1 ::1]
	I0818 18:55:58.153774   25471 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0818 18:55:58.153828   25471 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0818 18:55:58.153873   25471 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0818 18:55:58.153920   25471 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 18:55:58.153965   25471 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 18:55:58.154013   25471 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 18:55:58.154064   25471 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 18:55:58.154118   25471 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 18:55:58.154172   25471 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 18:55:58.154267   25471 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 18:55:58.154330   25471 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 18:55:58.155738   25471 out.go:235]   - Booting up control plane ...
	I0818 18:55:58.155813   25471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 18:55:58.155881   25471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 18:55:58.155940   25471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 18:55:58.156038   25471 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 18:55:58.156124   25471 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 18:55:58.156161   25471 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 18:55:58.156304   25471 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 18:55:58.156415   25471 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 18:55:58.156471   25471 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001537715s
	I0818 18:55:58.156531   25471 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 18:55:58.156595   25471 kubeadm.go:310] [api-check] The API server is healthy after 5.648292247s
	I0818 18:55:58.156762   25471 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 18:55:58.156912   25471 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 18:55:58.156979   25471 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 18:55:58.157229   25471 kubeadm.go:310] [mark-control-plane] Marking the node ha-189125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 18:55:58.157294   25471 kubeadm.go:310] [bootstrap-token] Using token: aoujqn.tyz3etdztt4uivkk
	I0818 18:55:58.158504   25471 out.go:235]   - Configuring RBAC rules ...
	I0818 18:55:58.158635   25471 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 18:55:58.158736   25471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 18:55:58.158903   25471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 18:55:58.159049   25471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 18:55:58.159158   25471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 18:55:58.159242   25471 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 18:55:58.159370   25471 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 18:55:58.159445   25471 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 18:55:58.159514   25471 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 18:55:58.159524   25471 kubeadm.go:310] 
	I0818 18:55:58.159603   25471 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 18:55:58.159612   25471 kubeadm.go:310] 
	I0818 18:55:58.159723   25471 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 18:55:58.159734   25471 kubeadm.go:310] 
	I0818 18:55:58.159768   25471 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 18:55:58.159842   25471 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 18:55:58.159912   25471 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 18:55:58.159922   25471 kubeadm.go:310] 
	I0818 18:55:58.159996   25471 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 18:55:58.160005   25471 kubeadm.go:310] 
	I0818 18:55:58.160062   25471 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 18:55:58.160073   25471 kubeadm.go:310] 
	I0818 18:55:58.160150   25471 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 18:55:58.160270   25471 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 18:55:58.160362   25471 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 18:55:58.160373   25471 kubeadm.go:310] 
	I0818 18:55:58.160484   25471 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 18:55:58.160595   25471 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 18:55:58.160606   25471 kubeadm.go:310] 
	I0818 18:55:58.160725   25471 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token aoujqn.tyz3etdztt4uivkk \
	I0818 18:55:58.160870   25471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 18:55:58.160911   25471 kubeadm.go:310] 	--control-plane 
	I0818 18:55:58.160923   25471 kubeadm.go:310] 
	I0818 18:55:58.161036   25471 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 18:55:58.161044   25471 kubeadm.go:310] 
	I0818 18:55:58.161175   25471 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token aoujqn.tyz3etdztt4uivkk \
	I0818 18:55:58.161323   25471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
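The kubeadm output above already prints the exact join commands for this cluster. As a minimal sketch (minikube automates this step itself; shown only to make the printed command runnable as-is on a prepared node), joining a second control-plane node by hand would look like:

    kubeadm join control-plane.minikube.internal:8443 \
      --token aoujqn.tyz3etdztt4uivkk \
      --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
      --control-plane
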
	I0818 18:55:58.161341   25471 cni.go:84] Creating CNI manager for ""
	I0818 18:55:58.161351   25471 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0818 18:55:58.162711   25471 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0818 18:55:58.163822   25471 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0818 18:55:58.169203   25471 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0818 18:55:58.169218   25471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0818 18:55:58.188127   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0818 18:55:58.622841   25471 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 18:55:58.622934   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:55:58.622961   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-189125 minikube.k8s.io/updated_at=2024_08_18T18_55_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=ha-189125 minikube.k8s.io/primary=true
	I0818 18:55:58.669712   25471 ops.go:34] apiserver oom_adj: -16
	I0818 18:55:58.847689   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:55:59.348324   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:55:59.848273   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:00.348616   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:00.848398   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:01.348467   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:01.848101   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:56:01.957872   25471 kubeadm.go:1113] duration metric: took 3.335030876s to wait for elevateKubeSystemPrivileges
	I0818 18:56:01.957911   25471 kubeadm.go:394] duration metric: took 15.128570088s to StartCluster
	I0818 18:56:01.957932   25471 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:01.958011   25471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:56:01.959069   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:01.959305   25471 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:56:01.959332   25471 start.go:241] waiting for startup goroutines ...
	I0818 18:56:01.959367   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0818 18:56:01.959349   25471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 18:56:01.959443   25471 addons.go:69] Setting storage-provisioner=true in profile "ha-189125"
	I0818 18:56:01.959478   25471 addons.go:69] Setting default-storageclass=true in profile "ha-189125"
	I0818 18:56:01.959523   25471 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-189125"
	I0818 18:56:01.959550   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:56:01.959481   25471 addons.go:234] Setting addon storage-provisioner=true in "ha-189125"
	I0818 18:56:01.959623   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:56:01.960014   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.960064   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.960149   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.960186   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.974624   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I0818 18:56:01.974795   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41417
	I0818 18:56:01.975220   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:01.975278   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:01.975728   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:01.975741   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:01.975859   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:01.975884   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:01.976041   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:01.976197   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:01.976216   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:56:01.976789   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.976834   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.978185   25471 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:56:01.978537   25471 kapi.go:59] client config for ha-189125: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key", CAFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0818 18:56:01.979072   25471 cert_rotation.go:140] Starting client certificate rotation controller
	I0818 18:56:01.979366   25471 addons.go:234] Setting addon default-storageclass=true in "ha-189125"
	I0818 18:56:01.979420   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:56:01.979831   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.979874   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.992481   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39103
	I0818 18:56:01.992968   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:01.993498   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:01.993523   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:01.993702   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42477
	I0818 18:56:01.993872   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:01.994043   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:56:01.994052   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:01.994550   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:01.994572   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:01.994896   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:01.995476   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:01.995514   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:01.996022   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:56:01.998223   25471 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 18:56:01.999531   25471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:56:01.999552   25471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 18:56:01.999571   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:56:02.002114   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:02.002476   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:56:02.002511   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:02.002741   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:56:02.002920   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:56:02.003052   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:56:02.003184   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:56:02.010991   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37909
	I0818 18:56:02.011365   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:02.011800   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:02.011815   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:02.012069   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:02.012271   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:56:02.013525   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:56:02.013735   25471 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 18:56:02.013748   25471 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 18:56:02.013760   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:56:02.016129   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:02.016506   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:56:02.016533   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:02.016668   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:56:02.016814   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:56:02.016927   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:56:02.017043   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:56:02.088071   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
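The pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1, per the log). A minimal check, assuming kubectl is pointed at this cluster's kubeconfig, would be:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 2 'hosts {'
    # expected to contain: 192.168.39.1 host.minikube.internal
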
	I0818 18:56:02.197400   25471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:56:02.245760   25471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 18:56:02.619549   25471 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0818 18:56:02.816515   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.816555   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.816600   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.816622   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.816851   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.816869   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.816879   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.816887   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.816929   25471 main.go:141] libmachine: (ha-189125) DBG | Closing plugin on server side
	I0818 18:56:02.817129   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.817149   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.817162   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.817150   25471 main.go:141] libmachine: (ha-189125) DBG | Closing plugin on server side
	I0818 18:56:02.817177   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.817195   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.817228   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.818528   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.818541   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.818614   25471 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0818 18:56:02.818637   25471 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0818 18:56:02.818726   25471 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0818 18:56:02.818738   25471 round_trippers.go:469] Request Headers:
	I0818 18:56:02.818748   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:56:02.818754   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:56:02.831248   25471 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0818 18:56:02.831902   25471 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0818 18:56:02.831917   25471 round_trippers.go:469] Request Headers:
	I0818 18:56:02.831924   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:56:02.831928   25471 round_trippers.go:473]     Content-Type: application/json
	I0818 18:56:02.831931   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:56:02.834381   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:56:02.834513   25471 main.go:141] libmachine: Making call to close driver server
	I0818 18:56:02.834524   25471 main.go:141] libmachine: (ha-189125) Calling .Close
	I0818 18:56:02.834753   25471 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:56:02.834776   25471 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:56:02.834792   25471 main.go:141] libmachine: (ha-189125) DBG | Closing plugin on server side
	I0818 18:56:02.836769   25471 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0818 18:56:02.838039   25471 addons.go:510] duration metric: took 878.697589ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0818 18:56:02.838067   25471 start.go:246] waiting for cluster config update ...
	I0818 18:56:02.838086   25471 start.go:255] writing updated cluster config ...
	I0818 18:56:02.839659   25471 out.go:201] 
	I0818 18:56:02.841145   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:56:02.841216   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:56:02.842895   25471 out.go:177] * Starting "ha-189125-m02" control-plane node in "ha-189125" cluster
	I0818 18:56:02.844170   25471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:56:02.844192   25471 cache.go:56] Caching tarball of preloaded images
	I0818 18:56:02.844277   25471 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 18:56:02.844287   25471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 18:56:02.844364   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:56:02.844521   25471 start.go:360] acquireMachinesLock for ha-189125-m02: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:56:02.844558   25471 start.go:364] duration metric: took 20.894µs to acquireMachinesLock for "ha-189125-m02"
	I0818 18:56:02.844574   25471 start.go:93] Provisioning new machine with config: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:56:02.844640   25471 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0818 18:56:02.846148   25471 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 18:56:02.846236   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:02.846268   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:02.860808   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37495
	I0818 18:56:02.861235   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:02.861702   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:02.861721   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:02.861996   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:02.862198   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetMachineName
	I0818 18:56:02.862368   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:02.862553   25471 start.go:159] libmachine.API.Create for "ha-189125" (driver="kvm2")
	I0818 18:56:02.862577   25471 client.go:168] LocalClient.Create starting
	I0818 18:56:02.862602   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 18:56:02.862633   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:56:02.862647   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:56:02.862692   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 18:56:02.862710   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:56:02.862721   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:56:02.862734   25471 main.go:141] libmachine: Running pre-create checks...
	I0818 18:56:02.862741   25471 main.go:141] libmachine: (ha-189125-m02) Calling .PreCreateCheck
	I0818 18:56:02.862917   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetConfigRaw
	I0818 18:56:02.863279   25471 main.go:141] libmachine: Creating machine...
	I0818 18:56:02.863298   25471 main.go:141] libmachine: (ha-189125-m02) Calling .Create
	I0818 18:56:02.863621   25471 main.go:141] libmachine: (ha-189125-m02) Creating KVM machine...
	I0818 18:56:02.864840   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found existing default KVM network
	I0818 18:56:02.865009   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found existing private KVM network mk-ha-189125
	I0818 18:56:02.865153   25471 main.go:141] libmachine: (ha-189125-m02) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02 ...
	I0818 18:56:02.865175   25471 main.go:141] libmachine: (ha-189125-m02) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:56:02.865197   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:02.865122   25839 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:56:02.865310   25471 main.go:141] libmachine: (ha-189125-m02) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 18:56:03.088149   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:03.087993   25839 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa...
	I0818 18:56:03.305944   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:03.305815   25839 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/ha-189125-m02.rawdisk...
	I0818 18:56:03.305967   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Writing magic tar header
	I0818 18:56:03.305977   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Writing SSH key tar header
	I0818 18:56:03.305985   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:03.305935   25839 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02 ...
	I0818 18:56:03.306074   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02
	I0818 18:56:03.306108   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02 (perms=drwx------)
	I0818 18:56:03.306118   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 18:56:03.306129   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 18:56:03.306148   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 18:56:03.306157   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 18:56:03.306168   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 18:56:03.306178   25471 main.go:141] libmachine: (ha-189125-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 18:56:03.306193   25471 main.go:141] libmachine: (ha-189125-m02) Creating domain...
	I0818 18:56:03.306202   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:56:03.306209   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 18:56:03.306220   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 18:56:03.306238   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home/jenkins
	I0818 18:56:03.306249   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Checking permissions on dir: /home
	I0818 18:56:03.306261   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Skipping /home - not owner
	I0818 18:56:03.307282   25471 main.go:141] libmachine: (ha-189125-m02) define libvirt domain using xml: 
	I0818 18:56:03.307307   25471 main.go:141] libmachine: (ha-189125-m02) <domain type='kvm'>
	I0818 18:56:03.307318   25471 main.go:141] libmachine: (ha-189125-m02)   <name>ha-189125-m02</name>
	I0818 18:56:03.307334   25471 main.go:141] libmachine: (ha-189125-m02)   <memory unit='MiB'>2200</memory>
	I0818 18:56:03.307347   25471 main.go:141] libmachine: (ha-189125-m02)   <vcpu>2</vcpu>
	I0818 18:56:03.307357   25471 main.go:141] libmachine: (ha-189125-m02)   <features>
	I0818 18:56:03.307368   25471 main.go:141] libmachine: (ha-189125-m02)     <acpi/>
	I0818 18:56:03.307392   25471 main.go:141] libmachine: (ha-189125-m02)     <apic/>
	I0818 18:56:03.307405   25471 main.go:141] libmachine: (ha-189125-m02)     <pae/>
	I0818 18:56:03.307416   25471 main.go:141] libmachine: (ha-189125-m02)     
	I0818 18:56:03.307425   25471 main.go:141] libmachine: (ha-189125-m02)   </features>
	I0818 18:56:03.307435   25471 main.go:141] libmachine: (ha-189125-m02)   <cpu mode='host-passthrough'>
	I0818 18:56:03.307445   25471 main.go:141] libmachine: (ha-189125-m02)   
	I0818 18:56:03.307456   25471 main.go:141] libmachine: (ha-189125-m02)   </cpu>
	I0818 18:56:03.307468   25471 main.go:141] libmachine: (ha-189125-m02)   <os>
	I0818 18:56:03.307476   25471 main.go:141] libmachine: (ha-189125-m02)     <type>hvm</type>
	I0818 18:56:03.307488   25471 main.go:141] libmachine: (ha-189125-m02)     <boot dev='cdrom'/>
	I0818 18:56:03.307503   25471 main.go:141] libmachine: (ha-189125-m02)     <boot dev='hd'/>
	I0818 18:56:03.307515   25471 main.go:141] libmachine: (ha-189125-m02)     <bootmenu enable='no'/>
	I0818 18:56:03.307539   25471 main.go:141] libmachine: (ha-189125-m02)   </os>
	I0818 18:56:03.307552   25471 main.go:141] libmachine: (ha-189125-m02)   <devices>
	I0818 18:56:03.307564   25471 main.go:141] libmachine: (ha-189125-m02)     <disk type='file' device='cdrom'>
	I0818 18:56:03.307597   25471 main.go:141] libmachine: (ha-189125-m02)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/boot2docker.iso'/>
	I0818 18:56:03.307622   25471 main.go:141] libmachine: (ha-189125-m02)       <target dev='hdc' bus='scsi'/>
	I0818 18:56:03.307635   25471 main.go:141] libmachine: (ha-189125-m02)       <readonly/>
	I0818 18:56:03.307645   25471 main.go:141] libmachine: (ha-189125-m02)     </disk>
	I0818 18:56:03.307657   25471 main.go:141] libmachine: (ha-189125-m02)     <disk type='file' device='disk'>
	I0818 18:56:03.307669   25471 main.go:141] libmachine: (ha-189125-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 18:56:03.307686   25471 main.go:141] libmachine: (ha-189125-m02)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/ha-189125-m02.rawdisk'/>
	I0818 18:56:03.307701   25471 main.go:141] libmachine: (ha-189125-m02)       <target dev='hda' bus='virtio'/>
	I0818 18:56:03.307713   25471 main.go:141] libmachine: (ha-189125-m02)     </disk>
	I0818 18:56:03.307723   25471 main.go:141] libmachine: (ha-189125-m02)     <interface type='network'>
	I0818 18:56:03.307735   25471 main.go:141] libmachine: (ha-189125-m02)       <source network='mk-ha-189125'/>
	I0818 18:56:03.307748   25471 main.go:141] libmachine: (ha-189125-m02)       <model type='virtio'/>
	I0818 18:56:03.307760   25471 main.go:141] libmachine: (ha-189125-m02)     </interface>
	I0818 18:56:03.307775   25471 main.go:141] libmachine: (ha-189125-m02)     <interface type='network'>
	I0818 18:56:03.307788   25471 main.go:141] libmachine: (ha-189125-m02)       <source network='default'/>
	I0818 18:56:03.307799   25471 main.go:141] libmachine: (ha-189125-m02)       <model type='virtio'/>
	I0818 18:56:03.307823   25471 main.go:141] libmachine: (ha-189125-m02)     </interface>
	I0818 18:56:03.307834   25471 main.go:141] libmachine: (ha-189125-m02)     <serial type='pty'>
	I0818 18:56:03.307866   25471 main.go:141] libmachine: (ha-189125-m02)       <target port='0'/>
	I0818 18:56:03.307888   25471 main.go:141] libmachine: (ha-189125-m02)     </serial>
	I0818 18:56:03.307901   25471 main.go:141] libmachine: (ha-189125-m02)     <console type='pty'>
	I0818 18:56:03.307912   25471 main.go:141] libmachine: (ha-189125-m02)       <target type='serial' port='0'/>
	I0818 18:56:03.307924   25471 main.go:141] libmachine: (ha-189125-m02)     </console>
	I0818 18:56:03.307934   25471 main.go:141] libmachine: (ha-189125-m02)     <rng model='virtio'>
	I0818 18:56:03.307945   25471 main.go:141] libmachine: (ha-189125-m02)       <backend model='random'>/dev/random</backend>
	I0818 18:56:03.307959   25471 main.go:141] libmachine: (ha-189125-m02)     </rng>
	I0818 18:56:03.307969   25471 main.go:141] libmachine: (ha-189125-m02)     
	I0818 18:56:03.307979   25471 main.go:141] libmachine: (ha-189125-m02)     
	I0818 18:56:03.307987   25471 main.go:141] libmachine: (ha-189125-m02)   </devices>
	I0818 18:56:03.307996   25471 main.go:141] libmachine: (ha-189125-m02) </domain>
	I0818 18:56:03.308012   25471 main.go:141] libmachine: (ha-189125-m02) 
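The domain XML printed above is what the kvm2 driver hands to libvirt when creating ha-189125-m02. A rough equivalent with the virsh CLI (shown only for illustration; the driver uses the libvirt API directly rather than shelling out, and ha-189125-m02.xml is a hypothetical file holding the XML above) would be:

    virsh --connect qemu:///system define ha-189125-m02.xml   # register the domain from the XML above
    virsh --connect qemu:///system start ha-189125-m02        # boot the VM
    virsh --connect qemu:///system domifaddr ha-189125-m02    # poll until a DHCP lease appears
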
	I0818 18:56:03.315735   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:bf:d4:3e in network default
	I0818 18:56:03.316418   25471 main.go:141] libmachine: (ha-189125-m02) Ensuring networks are active...
	I0818 18:56:03.316447   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:03.317294   25471 main.go:141] libmachine: (ha-189125-m02) Ensuring network default is active
	I0818 18:56:03.317698   25471 main.go:141] libmachine: (ha-189125-m02) Ensuring network mk-ha-189125 is active
	I0818 18:56:03.318186   25471 main.go:141] libmachine: (ha-189125-m02) Getting domain xml...
	I0818 18:56:03.318992   25471 main.go:141] libmachine: (ha-189125-m02) Creating domain...
	I0818 18:56:04.549654   25471 main.go:141] libmachine: (ha-189125-m02) Waiting to get IP...
	I0818 18:56:04.550438   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:04.550841   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:04.550906   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:04.550836   25839 retry.go:31] will retry after 189.70945ms: waiting for machine to come up
	I0818 18:56:04.742242   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:04.742892   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:04.742917   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:04.742851   25839 retry.go:31] will retry after 306.441708ms: waiting for machine to come up
	I0818 18:56:05.051422   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:05.051867   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:05.051894   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:05.051822   25839 retry.go:31] will retry after 309.375385ms: waiting for machine to come up
	I0818 18:56:05.362202   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:05.362738   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:05.362767   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:05.362696   25839 retry.go:31] will retry after 531.292093ms: waiting for machine to come up
	I0818 18:56:05.895365   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:05.895790   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:05.895817   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:05.895741   25839 retry.go:31] will retry after 476.983941ms: waiting for machine to come up
	I0818 18:56:06.374351   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:06.374784   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:06.374814   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:06.374725   25839 retry.go:31] will retry after 760.550106ms: waiting for machine to come up
	I0818 18:56:07.136601   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:07.137029   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:07.137052   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:07.137001   25839 retry.go:31] will retry after 833.085885ms: waiting for machine to come up
	I0818 18:56:07.972109   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:07.972719   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:07.972743   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:07.972679   25839 retry.go:31] will retry after 1.213935964s: waiting for machine to come up
	I0818 18:56:09.188185   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:09.188647   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:09.188676   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:09.188614   25839 retry.go:31] will retry after 1.477368217s: waiting for machine to come up
	I0818 18:56:10.668113   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:10.668564   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:10.668590   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:10.668514   25839 retry.go:31] will retry after 2.1955723s: waiting for machine to come up
	I0818 18:56:12.865446   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:12.865894   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:12.865922   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:12.865849   25839 retry.go:31] will retry after 1.867147502s: waiting for machine to come up
	I0818 18:56:14.734272   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:14.734703   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:14.734732   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:14.734657   25839 retry.go:31] will retry after 2.346085082s: waiting for machine to come up
	I0818 18:56:17.084059   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:17.084444   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:17.084475   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:17.084418   25839 retry.go:31] will retry after 3.612682767s: waiting for machine to come up
	I0818 18:56:20.700361   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:20.700713   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find current IP address of domain ha-189125-m02 in network mk-ha-189125
	I0818 18:56:20.700734   25471 main.go:141] libmachine: (ha-189125-m02) DBG | I0818 18:56:20.700687   25839 retry.go:31] will retry after 3.880590162s: waiting for machine to come up
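The retries above poll for the new VM's DHCP lease with growing delays until the guest comes up. A shell sketch of the same wait, using the MAC address and network name from the log (assumes virsh access to qemu:///system; minikube implements this in Go):

    # wait until the libvirt network hands out a lease for the m02 NIC
    while ! virsh --connect qemu:///system net-dhcp-leases mk-ha-189125 | grep -q '52:54:00:a7:f4:4c'; do
      sleep 2
    done
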
	I0818 18:56:24.583447   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.584008   25471 main.go:141] libmachine: (ha-189125-m02) Found IP for machine: 192.168.39.147
	I0818 18:56:24.584031   25471 main.go:141] libmachine: (ha-189125-m02) Reserving static IP address...
	I0818 18:56:24.584045   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has current primary IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.584474   25471 main.go:141] libmachine: (ha-189125-m02) DBG | unable to find host DHCP lease matching {name: "ha-189125-m02", mac: "52:54:00:a7:f4:4c", ip: "192.168.39.147"} in network mk-ha-189125
	I0818 18:56:24.655647   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Getting to WaitForSSH function...
	I0818 18:56:24.655687   25471 main.go:141] libmachine: (ha-189125-m02) Reserved static IP address: 192.168.39.147
	I0818 18:56:24.655700   25471 main.go:141] libmachine: (ha-189125-m02) Waiting for SSH to be available...
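
Note: the retry.go lines above show libmachine polling libvirt for the new domain's DHCP lease with a growing, jittered delay until an IP appears. A minimal Go sketch of that wait loop follows; the lookupLeaseIP helper, the jitter factor, the 3/2 growth rate and the 5s deadline are illustrative assumptions, not the values minikube actually uses.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupLeaseIP stands in for the libvirt DHCP-lease query; it returns an
    // error until the guest has obtained an address. Hypothetical helper.
    func lookupLeaseIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls with a growing, jittered delay until an IP appears or the
    // deadline passes, mirroring the "will retry after ..." messages in the log.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
    	start := time.Now()
    	delay := 500 * time.Millisecond
    	for time.Since(start) < deadline {
    		if ip, err := lookupLeaseIP(mac); err == nil {
    			return ip, nil
    		}
    		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
    		time.Sleep(jittered)
    		delay = delay * 3 / 2 // grow the base delay each attempt
    	}
    	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, deadline)
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:a7:f4:4c", 5*time.Second); err != nil {
    		fmt.Println("error:", err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }
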
	I0818 18:56:24.658246   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.658606   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:24.658635   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.658782   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Using SSH client type: external
	I0818 18:56:24.658806   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa (-rw-------)
	I0818 18:56:24.658829   25471 main.go:141] libmachine: (ha-189125-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 18:56:24.658839   25471 main.go:141] libmachine: (ha-189125-m02) DBG | About to run SSH command:
	I0818 18:56:24.658850   25471 main.go:141] libmachine: (ha-189125-m02) DBG | exit 0
	I0818 18:56:24.783851   25471 main.go:141] libmachine: (ha-189125-m02) DBG | SSH cmd err, output: <nil>: 
	I0818 18:56:24.784144   25471 main.go:141] libmachine: (ha-189125-m02) KVM machine creation complete!
	I0818 18:56:24.784456   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetConfigRaw
	I0818 18:56:24.784958   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:24.785135   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:24.785312   25471 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 18:56:24.785327   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 18:56:24.786656   25471 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 18:56:24.786669   25471 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 18:56:24.786675   25471 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 18:56:24.786680   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:24.788953   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.789330   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:24.789370   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.789542   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:24.789726   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:24.789897   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:24.790075   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:24.790250   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:24.790448   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:24.790460   25471 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 18:56:24.894553   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:56:24.894576   25471 main.go:141] libmachine: Detecting the provisioner...
	I0818 18:56:24.894600   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:24.897373   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.897739   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:24.897767   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:24.897909   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:24.898119   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:24.898243   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:24.898374   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:24.898524   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:24.898690   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:24.898963   25471 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 18:56:25.004189   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 18:56:25.004258   25471 main.go:141] libmachine: found compatible host: buildroot
	I0818 18:56:25.004271   25471 main.go:141] libmachine: Provisioning with buildroot...
	I0818 18:56:25.004284   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetMachineName
	I0818 18:56:25.004538   25471 buildroot.go:166] provisioning hostname "ha-189125-m02"
	I0818 18:56:25.004566   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetMachineName
	I0818 18:56:25.004753   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.007197   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.007543   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.007568   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.007762   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.007935   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.008072   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.008219   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.008374   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:25.008550   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:25.008567   25471 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-189125-m02 && echo "ha-189125-m02" | sudo tee /etc/hostname
	I0818 18:56:25.129102   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125-m02
	
	I0818 18:56:25.129132   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.131946   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.132268   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.132304   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.132456   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.132643   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.132782   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.132898   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.133023   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:25.133174   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:25.133188   25471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-189125-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-189125-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-189125-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:56:25.244663   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:56:25.244698   25471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 18:56:25.244714   25471 buildroot.go:174] setting up certificates
	I0818 18:56:25.244721   25471 provision.go:84] configureAuth start
	I0818 18:56:25.244729   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetMachineName
	I0818 18:56:25.245016   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 18:56:25.247751   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.248104   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.248134   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.248323   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.250652   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.250985   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.251013   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.251128   25471 provision.go:143] copyHostCerts
	I0818 18:56:25.251158   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:56:25.251197   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 18:56:25.251206   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:56:25.251273   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 18:56:25.251345   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:56:25.251362   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 18:56:25.251368   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:56:25.251415   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 18:56:25.251475   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:56:25.251492   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 18:56:25.251498   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:56:25.251521   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 18:56:25.251570   25471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.ha-189125-m02 san=[127.0.0.1 192.168.39.147 ha-189125-m02 localhost minikube]
	I0818 18:56:25.348489   25471 provision.go:177] copyRemoteCerts
	I0818 18:56:25.348544   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:56:25.348565   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.351281   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.351657   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.351684   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.351832   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.352062   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.352236   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.352411   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 18:56:25.433192   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 18:56:25.433263   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 18:56:25.457661   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 18:56:25.457729   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:56:25.481448   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 18:56:25.481512   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 18:56:25.506378   25471 provision.go:87] duration metric: took 261.641684ms to configureAuth
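
Note: configureAuth above issues a Docker-machine style server certificate whose SANs cover the loopback address, the node IP, the hostname, and the cluster aliases, then copies it to /etc/docker on the guest. A rough, self-contained Go sketch of issuing such a certificate with crypto/x509 follows; the throwaway in-memory CA, 24h lifetime, and elided error handling are illustrative assumptions (minikube loads its real CA from the ca.pem/ca-key.pem paths shown in the log).

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA stands in for the persisted ca.pem / ca-key.pem.
    	// Errors are elided for brevity in this sketch.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the SAN list reported in the log above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-189125-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.147")},
    		DNSNames:     []string{"ha-189125-m02", "localhost", "minikube"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
    	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
    }
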
	I0818 18:56:25.506402   25471 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:56:25.506577   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:56:25.506654   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.509394   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.509727   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.509748   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.509944   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.510145   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.510350   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.510528   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.510710   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:25.510915   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:25.510932   25471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 18:56:25.780823   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 18:56:25.780847   25471 main.go:141] libmachine: Checking connection to Docker...
	I0818 18:56:25.780858   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetURL
	I0818 18:56:25.782093   25471 main.go:141] libmachine: (ha-189125-m02) DBG | Using libvirt version 6000000
	I0818 18:56:25.784160   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.784520   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.784545   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.784694   25471 main.go:141] libmachine: Docker is up and running!
	I0818 18:56:25.784708   25471 main.go:141] libmachine: Reticulating splines...
	I0818 18:56:25.784714   25471 client.go:171] duration metric: took 22.922131138s to LocalClient.Create
	I0818 18:56:25.784733   25471 start.go:167] duration metric: took 22.92218128s to libmachine.API.Create "ha-189125"
	I0818 18:56:25.784742   25471 start.go:293] postStartSetup for "ha-189125-m02" (driver="kvm2")
	I0818 18:56:25.784751   25471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:56:25.784774   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:25.785002   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:56:25.785025   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.787001   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.787336   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.787358   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.787513   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.787674   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.787823   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.787921   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 18:56:25.870456   25471 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:56:25.874913   25471 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:56:25.874936   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 18:56:25.874999   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 18:56:25.875070   25471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 18:56:25.875082   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 18:56:25.875195   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 18:56:25.884827   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:56:25.908555   25471 start.go:296] duration metric: took 123.800351ms for postStartSetup
	I0818 18:56:25.908610   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetConfigRaw
	I0818 18:56:25.909271   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 18:56:25.911557   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.911891   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.911912   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.912185   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:56:25.912355   25471 start.go:128] duration metric: took 23.067706224s to createHost
	I0818 18:56:25.912374   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:25.914769   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.915089   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:25.915110   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:25.915290   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:25.915475   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.915634   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:25.915735   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:25.915859   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:56:25.916006   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.147 22 <nil> <nil>}
	I0818 18:56:25.916015   25471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:56:26.020357   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007385.993920413
	
	I0818 18:56:26.020378   25471 fix.go:216] guest clock: 1724007385.993920413
	I0818 18:56:26.020389   25471 fix.go:229] Guest: 2024-08-18 18:56:25.993920413 +0000 UTC Remote: 2024-08-18 18:56:25.912365204 +0000 UTC m=+69.117362276 (delta=81.555209ms)
	I0818 18:56:26.020415   25471 fix.go:200] guest clock delta is within tolerance: 81.555209ms
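
Note: fix.go above reads the guest clock with `date +%s.%N` and compares it to the host clock, skipping a resync when the delta is within tolerance. A minimal sketch of that comparison follows; the 2-second tolerance and the assumption of a 9-digit fractional part are for illustration only, not necessarily the thresholds minikube applies.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses "seconds.nanoseconds" output from `date +%s.%N`
    // (fractional part assumed to be 9 digits) and returns guest minus host.
    func guestClockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	return time.Unix(sec, nsec).Sub(hostNow), nil
    }

    func main() {
    	// Values taken from the log above: guest 1724007385.993920413, host ...25.912365204.
    	delta, err := guestClockDelta("1724007385.993920413", time.Unix(1724007385, 912365204))
    	if err != nil {
    		panic(err)
    	}
    	if math.Abs(delta.Seconds()) < 2 { // 2s tolerance is an assumption
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	} else {
    		fmt.Printf("guest clock delta too large, would resync: %v\n", delta)
    	}
    }
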
	I0818 18:56:26.020423   25471 start.go:83] releasing machines lock for "ha-189125-m02", held for 23.175855754s
	I0818 18:56:26.020453   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:26.020678   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 18:56:26.023373   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.023750   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:26.023771   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.025861   25471 out.go:177] * Found network options:
	I0818 18:56:26.027004   25471 out.go:177]   - NO_PROXY=192.168.39.49
	W0818 18:56:26.028085   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 18:56:26.028108   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:26.028609   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:26.028784   25471 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 18:56:26.028868   25471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:56:26.028905   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	W0818 18:56:26.028976   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 18:56:26.029055   25471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 18:56:26.029075   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 18:56:26.031162   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.031411   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.031559   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:26.031585   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.031718   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:26.031722   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:26.031744   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:26.031920   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 18:56:26.031922   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:26.032129   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:26.032136   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 18:56:26.032271   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 18:56:26.032370   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 18:56:26.032570   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 18:56:26.266298   25471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:56:26.272330   25471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:56:26.272391   25471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:56:26.288956   25471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 18:56:26.288976   25471 start.go:495] detecting cgroup driver to use...
	I0818 18:56:26.289039   25471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:56:26.311860   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:56:26.326557   25471 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:56:26.326620   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:56:26.340258   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:56:26.354673   25471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:56:26.473057   25471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:56:26.649348   25471 docker.go:233] disabling docker service ...
	I0818 18:56:26.649425   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:56:26.664482   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:56:26.677312   25471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:56:26.798114   25471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:56:26.922521   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:56:26.937473   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:56:26.956873   25471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 18:56:26.956927   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:26.967554   25471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 18:56:26.967611   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:26.978405   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:26.989175   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:27.000397   25471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:56:27.011882   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:27.022693   25471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:27.040262   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:56:27.050444   25471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:56:27.059996   25471 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 18:56:27.060055   25471 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 18:56:27.073043   25471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 18:56:27.083033   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:56:27.201750   25471 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 18:56:27.338450   25471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 18:56:27.338508   25471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 18:56:27.343146   25471 start.go:563] Will wait 60s for crictl version
	I0818 18:56:27.343198   25471 ssh_runner.go:195] Run: which crictl
	I0818 18:56:27.346822   25471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:56:27.386415   25471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 18:56:27.386485   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:56:27.414020   25471 ssh_runner.go:195] Run: crio --version
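
Note: after rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting the service, the log waits up to 60s for /var/run/crio/crio.sock and then queries crictl/crio versions. A small Go sketch of that socket wait follows; the 500ms poll interval is an assumption for illustration.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the CRI socket exists or the deadline expires,
    // mirroring the "Will wait 60s for socket path /var/run/crio/crio.sock" step.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
    	}
    	return fmt.Errorf("%s did not appear within %v", path, timeout)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println("error:", err)
    		os.Exit(1)
    	}
    	fmt.Println("CRI socket is ready")
    }
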
	I0818 18:56:27.444917   25471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 18:56:27.446412   25471 out.go:177]   - env NO_PROXY=192.168.39.49
	I0818 18:56:27.447903   25471 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 18:56:27.450438   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:27.450780   25471 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:56:17 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 18:56:27.450813   25471 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 18:56:27.451015   25471 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:56:27.455183   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:56:27.469397   25471 mustload.go:65] Loading cluster: ha-189125
	I0818 18:56:27.469602   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:56:27.469905   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:27.469937   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:27.484830   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40151
	I0818 18:56:27.485314   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:27.485898   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:27.485928   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:27.486280   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:27.486456   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:56:27.488234   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:56:27.488577   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:56:27.488602   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:56:27.505149   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I0818 18:56:27.505577   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:56:27.506048   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:56:27.506067   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:56:27.506382   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:56:27.506573   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:56:27.506738   25471 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125 for IP: 192.168.39.147
	I0818 18:56:27.506749   25471 certs.go:194] generating shared ca certs ...
	I0818 18:56:27.506761   25471 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:27.506890   25471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 18:56:27.506946   25471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 18:56:27.506963   25471 certs.go:256] generating profile certs ...
	I0818 18:56:27.507060   25471 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key
	I0818 18:56:27.507093   25471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ccfd3871
	I0818 18:56:27.507115   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ccfd3871 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49 192.168.39.147 192.168.39.254]
	I0818 18:56:27.776824   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ccfd3871 ...
	I0818 18:56:27.776851   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ccfd3871: {Name:mk693f24e6c521c769dd1a90fa61ded18ba545f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:27.777012   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ccfd3871 ...
	I0818 18:56:27.777025   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ccfd3871: {Name:mk5801ce96a42bd9b95bdbb774232e6a93638a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:56:27.777103   25471 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ccfd3871 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt
	I0818 18:56:27.777230   25471 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ccfd3871 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key
	I0818 18:56:27.777352   25471 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key
	I0818 18:56:27.777366   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 18:56:27.777378   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 18:56:27.777391   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 18:56:27.777405   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 18:56:27.777417   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 18:56:27.777429   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 18:56:27.777443   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 18:56:27.777455   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 18:56:27.777501   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 18:56:27.777528   25471 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 18:56:27.777538   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:56:27.777559   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:56:27.777579   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:56:27.777599   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 18:56:27.777634   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:56:27.777660   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:56:27.777673   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 18:56:27.777685   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 18:56:27.777715   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:56:27.780880   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:27.781262   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:56:27.781290   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:56:27.781490   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:56:27.781664   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:56:27.781829   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:56:27.781920   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:56:27.851820   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 18:56:27.857165   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 18:56:27.868927   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 18:56:27.873073   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0818 18:56:27.888998   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 18:56:27.893623   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 18:56:27.906221   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 18:56:27.911138   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 18:56:27.924038   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 18:56:27.928729   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 18:56:27.939534   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 18:56:27.944256   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 18:56:27.955028   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:56:27.981285   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:56:28.005203   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:56:28.029382   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 18:56:28.053677   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0818 18:56:28.077371   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 18:56:28.101987   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:56:28.126475   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 18:56:28.150219   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:56:28.173489   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 18:56:28.197046   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 18:56:28.222079   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 18:56:28.239062   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0818 18:56:28.255936   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 18:56:28.273293   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 18:56:28.289535   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 18:56:28.306186   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 18:56:28.322487   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 18:56:28.340403   25471 ssh_runner.go:195] Run: openssl version
	I0818 18:56:28.346165   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:56:28.357299   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:56:28.362092   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:56:28.362148   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:56:28.368013   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 18:56:28.379308   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 18:56:28.390732   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 18:56:28.395653   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 18:56:28.395706   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 18:56:28.401551   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 18:56:28.412455   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 18:56:28.423271   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 18:56:28.427896   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 18:56:28.427947   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 18:56:28.433474   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 18:56:28.444044   25471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:56:28.448173   25471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:56:28.448242   25471 kubeadm.go:934] updating node {m02 192.168.39.147 8443 v1.31.0 crio true true} ...
	I0818 18:56:28.448354   25471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-189125-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 18:56:28.448387   25471 kube-vip.go:115] generating kube-vip config ...
	I0818 18:56:28.448421   25471 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 18:56:28.465207   25471 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 18:56:28.465274   25471 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
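The manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes, see the scp step below), where kubelet runs it as a static pod that announces the control-plane VIP 192.168.39.254 on port 8443. As a minimal illustration, the snippet below only sanity-checks that such a manifest decodes as a core/v1 Pod; the package and function names are hypothetical and this is not minikube's own code.

package hacheck

import (
	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// parseKubeVipManifest decodes a kube-vip static-pod manifest like the one
// generated above and returns it as a typed Pod object.
func parseKubeVipManifest(data []byte) (*corev1.Pod, error) {
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		return nil, err
	}
	return &pod, nil
}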
	I0818 18:56:28.465320   25471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:56:28.474939   25471 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0818 18:56:28.474993   25471 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0818 18:56:28.484664   25471 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0818 18:56:28.484693   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0818 18:56:28.484749   25471 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0818 18:56:28.484760   25471 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0818 18:56:28.484773   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0818 18:56:28.489593   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0818 18:56:28.489619   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0818 18:57:07.300938   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0818 18:57:07.301041   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0818 18:57:07.306928   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0818 18:57:07.306960   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0818 18:57:21.679905   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:57:21.694904   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0818 18:57:21.694988   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0818 18:57:21.699099   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0818 18:57:21.699128   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0818 18:57:22.023889   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 18:57:22.033513   25471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0818 18:57:22.050257   25471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:57:22.067666   25471 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0818 18:57:22.084525   25471 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0818 18:57:22.088470   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:57:22.102139   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:57:22.228480   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:57:22.244566   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:57:22.244880   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:57:22.244927   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:57:22.260307   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0818 18:57:22.260759   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:57:22.261222   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:57:22.261241   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:57:22.261547   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:57:22.261798   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:57:22.261960   25471 start.go:317] joinCluster: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cluster
Name:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:57:22.262102   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0818 18:57:22.262126   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:57:22.265153   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:57:22.265644   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:57:22.265672   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:57:22.265880   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:57:22.266035   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:57:22.266211   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:57:22.266348   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:57:22.413322   25471 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:57:22.413420   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lspzfs.e7jiyw0f2vub7bzi --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m02 --control-plane --apiserver-advertise-address=192.168.39.147 --apiserver-bind-port=8443"
	I0818 18:57:43.013800   25471 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lspzfs.e7jiyw0f2vub7bzi --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m02 --control-plane --apiserver-advertise-address=192.168.39.147 --apiserver-bind-port=8443": (20.600347191s)
	I0818 18:57:43.013834   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0818 18:57:43.519884   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-189125-m02 minikube.k8s.io/updated_at=2024_08_18T18_57_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=ha-189125 minikube.k8s.io/primary=false
	I0818 18:57:43.625984   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-189125-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0818 18:57:43.732295   25471 start.go:319] duration metric: took 21.47033009s to joinCluster
	I0818 18:57:43.732370   25471 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:57:43.732689   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:57:43.733914   25471 out.go:177] * Verifying Kubernetes components...
	I0818 18:57:43.735137   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:57:43.968889   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:57:44.022903   25471 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:57:44.023165   25471 kapi.go:59] client config for ha-189125: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key", CAFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 18:57:44.023229   25471 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.49:8443
	I0818 18:57:44.023499   25471 node_ready.go:35] waiting up to 6m0s for node "ha-189125-m02" to be "Ready" ...
	I0818 18:57:44.023594   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:44.023605   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:44.023615   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:44.023620   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:44.034010   25471 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0818 18:57:44.523644   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:44.523679   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:44.523687   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:44.523693   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:44.527673   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:45.024085   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:45.024106   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:45.024117   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:45.024122   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:45.028184   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:45.524187   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:45.524216   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:45.524227   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:45.524232   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:45.530294   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:57:46.024367   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:46.024391   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:46.024409   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:46.024414   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:46.028288   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:46.028857   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:46.524311   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:46.524333   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:46.524341   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:46.524345   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:46.527565   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:47.024585   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:47.024605   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:47.024613   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:47.024620   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:47.028783   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:47.524314   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:47.524338   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:47.524419   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:47.524435   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:47.527616   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:48.024174   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:48.024194   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:48.024205   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:48.024210   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:48.029579   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:57:48.030278   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:48.524614   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:48.524636   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:48.524645   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:48.524651   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:48.527520   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:57:49.024621   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:49.024645   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:49.024654   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:49.024662   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:49.028758   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:49.524632   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:49.524654   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:49.524665   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:49.524670   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:49.528158   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:50.024633   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:50.024652   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:50.024660   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:50.024665   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:50.028828   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:50.523807   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:50.523827   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:50.523834   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:50.523837   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:50.527173   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:50.528329   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:51.023735   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:51.023760   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:51.023768   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:51.023774   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:51.028945   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:57:51.523733   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:51.523755   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:51.523765   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:51.523771   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:51.529671   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:57:52.024174   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:52.024197   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:52.024207   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:52.024211   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:52.027681   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:52.524093   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:52.524137   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:52.524145   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:52.524149   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:52.526979   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:57:53.024443   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:53.024464   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:53.024472   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:53.024476   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:53.028194   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:53.028983   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:53.524434   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:53.524456   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:53.524465   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:53.524469   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:53.528476   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:54.023723   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:54.023741   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:54.023748   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:54.023752   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:54.027335   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:54.524350   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:54.524376   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:54.524385   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:54.524388   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:54.528240   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:55.024322   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:55.024343   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:55.024351   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:55.024355   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:55.028532   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:57:55.524451   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:55.524471   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:55.524479   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:55.524483   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:55.528004   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:55.528676   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:56.024036   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:56.024059   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:56.024067   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:56.024071   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:56.026855   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:57:56.524615   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:56.524635   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:56.524643   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:56.524647   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:56.528071   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:57.024053   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:57.024073   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:57.024082   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:57.024088   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:57.027107   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:57.524328   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:57.524346   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:57.524354   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:57.524360   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:57.527464   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:58.023936   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:58.023964   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:58.023974   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:58.023981   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:58.026995   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:57:58.027742   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:57:58.524035   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:58.524057   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:58.524065   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:58.524068   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:58.527280   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:59.024391   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:59.024412   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:59.024420   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:59.024424   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:59.027594   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:57:59.524618   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:57:59.524639   25471 round_trippers.go:469] Request Headers:
	I0818 18:57:59.524651   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:57:59.524656   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:57:59.527690   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:00.024689   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:00.024712   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:00.024720   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:00.024724   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:00.027716   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:00.028252   25471 node_ready.go:53] node "ha-189125-m02" has status "Ready":"False"
	I0818 18:58:00.524681   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:00.524704   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:00.524712   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:00.524716   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:00.527895   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:01.023776   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:01.023800   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:01.023807   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:01.023811   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:01.027204   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:01.524199   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:01.524220   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:01.524228   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:01.524232   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:01.527841   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.023989   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:02.024012   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.024020   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.024024   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.027223   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.524496   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:02.524521   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.524532   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.524537   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.527626   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.528030   25471 node_ready.go:49] node "ha-189125-m02" has status "Ready":"True"
	I0818 18:58:02.528046   25471 node_ready.go:38] duration metric: took 18.504530405s for node "ha-189125-m02" to be "Ready" ...
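The long run of GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02 requests above is minikube polling roughly every 500ms until the node reports a Ready condition of True. Below is a minimal client-go sketch of an equivalent wait loop, assuming the kubeconfig path shown earlier in this log; it is illustrative only and is not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above; the client ends up talking to
	// https://192.168.39.49:8443 once the stale VIP host is overridden.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-7747/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s node wait above
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-189125-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-189125-m02 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
	}
	fmt.Println("timed out waiting for node ha-189125-m02 to become Ready")
}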
	I0818 18:58:02.528054   25471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:58:02.528113   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:02.528122   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.528128   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.528132   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.532615   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:02.538126   25471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.538210   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-7xr26
	I0818 18:58:02.538219   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.538227   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.538230   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.542065   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.542785   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.542802   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.542813   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.542820   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.545807   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.546294   25471 pod_ready.go:93] pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.546315   25471 pod_ready.go:82] duration metric: took 8.164461ms for pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.546327   25471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.546395   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q9j97
	I0818 18:58:02.546406   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.546415   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.546434   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.548550   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.549332   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.549348   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.549354   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.549358   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.552328   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.552829   25471 pod_ready.go:93] pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.552845   25471 pod_ready.go:82] duration metric: took 6.508478ms for pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.552853   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.552899   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125
	I0818 18:58:02.552906   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.552912   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.552919   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.555280   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.556026   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.556043   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.556053   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.556059   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.558355   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.558947   25471 pod_ready.go:93] pod "etcd-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.558964   25471 pod_ready.go:82] duration metric: took 6.101918ms for pod "etcd-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.558975   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.559032   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125-m02
	I0818 18:58:02.559041   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.559052   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.559060   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.561242   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.561942   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:02.561959   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.561968   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.561974   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.564135   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.564594   25471 pod_ready.go:93] pod "etcd-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.564610   25471 pod_ready.go:82] duration metric: took 5.626815ms for pod "etcd-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.564627   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.724985   25471 request.go:632] Waited for 160.28756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125
	I0818 18:58:02.725053   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125
	I0818 18:58:02.725059   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.725067   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.725070   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.728106   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:02.925412   25471 request.go:632] Waited for 196.61739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.925493   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:02.925500   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:02.925510   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:02.925515   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:02.928304   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:02.928779   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:02.928797   25471 pod_ready.go:82] duration metric: took 364.161268ms for pod "kube-apiserver-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:02.928805   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.124917   25471 request.go:632] Waited for 196.044329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m02
	I0818 18:58:03.124971   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m02
	I0818 18:58:03.124977   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.124987   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.124993   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.128374   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:03.325492   25471 request.go:632] Waited for 196.391258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:03.325554   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:03.325559   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.325565   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.325569   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.329364   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:03.329939   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:03.329956   25471 pod_ready.go:82] duration metric: took 401.144525ms for pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.329964   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.525046   25471 request.go:632] Waited for 195.017553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125
	I0818 18:58:03.525118   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125
	I0818 18:58:03.525123   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.525131   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.525138   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.528377   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:03.725379   25471 request.go:632] Waited for 196.368187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:03.725441   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:03.725446   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.725454   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.725462   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.733361   25471 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0818 18:58:03.734061   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:03.734080   25471 pod_ready.go:82] duration metric: took 404.110264ms for pod "kube-controller-manager-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.734090   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:03.925125   25471 request.go:632] Waited for 190.960818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m02
	I0818 18:58:03.925202   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m02
	I0818 18:58:03.925208   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:03.925218   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:03.925236   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:03.929714   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:04.124822   25471 request.go:632] Waited for 194.214505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:04.124871   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:04.124876   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.124883   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.124887   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.128296   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:04.129112   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:04.129130   25471 pod_ready.go:82] duration metric: took 395.033443ms for pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.129139   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-96xwx" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.325322   25471 request.go:632] Waited for 196.121065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-96xwx
	I0818 18:58:04.325386   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-96xwx
	I0818 18:58:04.325394   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.325403   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.325408   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.328746   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:04.525063   25471 request.go:632] Waited for 195.35461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:04.525140   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:04.525150   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.525158   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.525162   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.531029   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:58:04.531510   25471 pod_ready.go:93] pod "kube-proxy-96xwx" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:04.531527   25471 pod_ready.go:82] duration metric: took 402.383581ms for pod "kube-proxy-96xwx" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.531538   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-scwlr" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.725579   25471 request.go:632] Waited for 193.960312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scwlr
	I0818 18:58:04.725647   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scwlr
	I0818 18:58:04.725655   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.725665   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.725675   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.729209   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:04.925234   25471 request.go:632] Waited for 195.408461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:04.925304   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:04.925312   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:04.925322   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:04.925328   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:04.928729   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:04.929254   25471 pod_ready.go:93] pod "kube-proxy-scwlr" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:04.929273   25471 pod_ready.go:82] duration metric: took 397.729124ms for pod "kube-proxy-scwlr" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:04.929282   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:05.125332   25471 request.go:632] Waited for 195.992024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125
	I0818 18:58:05.125402   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125
	I0818 18:58:05.125409   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.125416   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.125429   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.130487   25471 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0818 18:58:05.325393   25471 request.go:632] Waited for 194.358945ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:05.325468   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:58:05.325474   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.325486   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.325492   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.328765   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:05.329221   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:05.329240   25471 pod_ready.go:82] duration metric: took 399.951715ms for pod "kube-scheduler-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:05.329250   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:05.525435   25471 request.go:632] Waited for 196.100576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m02
	I0818 18:58:05.525519   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m02
	I0818 18:58:05.525529   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.525540   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.525551   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.528437   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:58:05.725307   25471 request.go:632] Waited for 196.364215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:05.725376   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:58:05.725381   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.725388   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.725392   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.728475   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:05.728977   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:58:05.728993   25471 pod_ready.go:82] duration metric: took 399.737599ms for pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:58:05.729002   25471 pod_ready.go:39] duration metric: took 3.200938183s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:58:05.729017   25471 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:58:05.729063   25471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:58:05.744662   25471 api_server.go:72] duration metric: took 22.01225621s to wait for apiserver process to appear ...
	I0818 18:58:05.744688   25471 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:58:05.744710   25471 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0818 18:58:05.749099   25471 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I0818 18:58:05.749170   25471 round_trippers.go:463] GET https://192.168.39.49:8443/version
	I0818 18:58:05.749182   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.749193   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.749197   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.750281   25471 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0818 18:58:05.750388   25471 api_server.go:141] control plane version: v1.31.0
	I0818 18:58:05.750405   25471 api_server.go:131] duration metric: took 5.710399ms to wait for apiserver health ...
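
For context on what this step is doing: the readiness check above is simply an HTTPS GET against the apiserver's /healthz endpoint followed by a GET of /version. A minimal stand-alone sketch of such a probe is shown below; this is illustrative only, not minikube's api_server.go, the address is taken from the log lines above, and skipping TLS verification is an assumption made purely for brevity.

    // healthz_sketch.go: minimal apiserver health probe (illustrative only).
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The test VM uses a self-signed CA; a real client would load the
            // cluster CA bundle instead of disabling certificate verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.49:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not ready:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok", as in the log above.
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
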
	I0818 18:58:05.750416   25471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:58:05.924839   25471 request.go:632] Waited for 174.352065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:05.924890   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:05.924896   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:05.924903   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:05.924907   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:05.929868   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:05.933959   25471 system_pods.go:59] 17 kube-system pods found
	I0818 18:58:05.933986   25471 system_pods.go:61] "coredns-6f6b679f8f-7xr26" [d4354313-0e2d-4d96-9cd1-a8f69a4aee26] Running
	I0818 18:58:05.933992   25471 system_pods.go:61] "coredns-6f6b679f8f-q9j97" [1f1c0597-6624-4a3e-8356-7d23555c2809] Running
	I0818 18:58:05.933996   25471 system_pods.go:61] "etcd-ha-189125" [441d8b87-bb19-479f-86a3-eda66e820a81] Running
	I0818 18:58:05.934000   25471 system_pods.go:61] "etcd-ha-189125-m02" [b656f93e-ece8-41c0-b109-584cf52e7b64] Running
	I0818 18:58:05.934003   25471 system_pods.go:61] "kindnet-jwxjh" [086477c9-e6eb-403e-adc7-b15347918484] Running
	I0818 18:58:05.934006   25471 system_pods.go:61] "kindnet-qhnpv" [b23c4910-6e34-46ec-98f2-60ec7ebdd064] Running
	I0818 18:58:05.934010   25471 system_pods.go:61] "kube-apiserver-ha-189125" [707fe85b-0545-4306-aa6f-22580ddb6203] Running
	I0818 18:58:05.934013   25471 system_pods.go:61] "kube-apiserver-ha-189125-m02" [91926546-4ebb-4e81-a0eb-ffaff8d05fdc] Running
	I0818 18:58:05.934018   25471 system_pods.go:61] "kube-controller-manager-ha-189125" [97597204-06d9-4bd5-946d-3f429d2f0d35] Running
	I0818 18:58:05.934022   25471 system_pods.go:61] "kube-controller-manager-ha-189125-m02" [1a866408-5605-49f1-b183-a0c438685633] Running
	I0818 18:58:05.934025   25471 system_pods.go:61] "kube-proxy-96xwx" [c3f6dfae-e097-4889-933b-433f1b6b78fe] Running
	I0818 18:58:05.934028   25471 system_pods.go:61] "kube-proxy-scwlr" [03131eab-be49-4cb1-a0a6-1349f0f8eef7] Running
	I0818 18:58:05.934031   25471 system_pods.go:61] "kube-scheduler-ha-189125" [48202e0e-cebc-47fd-b18a-1dc6372caf8a] Running
	I0818 18:58:05.934035   25471 system_pods.go:61] "kube-scheduler-ha-189125-m02" [cc583916-30b6-46a6-ab8a-651f68065443] Running
	I0818 18:58:05.934038   25471 system_pods.go:61] "kube-vip-ha-189125" [0546880a-99fa-4d9a-a754-586b3b7921ee] Running
	I0818 18:58:05.934041   25471 system_pods.go:61] "kube-vip-ha-189125-m02" [ad04a007-45f2-4a01-97e3-202fa39a028a] Running
	I0818 18:58:05.934044   25471 system_pods.go:61] "storage-provisioner" [35b948dd-9b74-4f76-9cdb-82e0901fc421] Running
	I0818 18:58:05.934049   25471 system_pods.go:74] duration metric: took 183.626614ms to wait for pod list to return data ...
	I0818 18:58:05.934059   25471 default_sa.go:34] waiting for default service account to be created ...
	I0818 18:58:06.125476   25471 request.go:632] Waited for 191.346767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/default/serviceaccounts
	I0818 18:58:06.125538   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/default/serviceaccounts
	I0818 18:58:06.125544   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:06.125554   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:06.125559   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:06.129209   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:06.129479   25471 default_sa.go:45] found service account: "default"
	I0818 18:58:06.129501   25471 default_sa.go:55] duration metric: took 195.435484ms for default service account to be created ...
	I0818 18:58:06.129512   25471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 18:58:06.324965   25471 request.go:632] Waited for 195.377711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:06.325036   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:58:06.325041   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:06.325048   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:06.325052   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:06.329381   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:06.334419   25471 system_pods.go:86] 17 kube-system pods found
	I0818 18:58:06.334446   25471 system_pods.go:89] "coredns-6f6b679f8f-7xr26" [d4354313-0e2d-4d96-9cd1-a8f69a4aee26] Running
	I0818 18:58:06.334451   25471 system_pods.go:89] "coredns-6f6b679f8f-q9j97" [1f1c0597-6624-4a3e-8356-7d23555c2809] Running
	I0818 18:58:06.334457   25471 system_pods.go:89] "etcd-ha-189125" [441d8b87-bb19-479f-86a3-eda66e820a81] Running
	I0818 18:58:06.334460   25471 system_pods.go:89] "etcd-ha-189125-m02" [b656f93e-ece8-41c0-b109-584cf52e7b64] Running
	I0818 18:58:06.334464   25471 system_pods.go:89] "kindnet-jwxjh" [086477c9-e6eb-403e-adc7-b15347918484] Running
	I0818 18:58:06.334467   25471 system_pods.go:89] "kindnet-qhnpv" [b23c4910-6e34-46ec-98f2-60ec7ebdd064] Running
	I0818 18:58:06.334471   25471 system_pods.go:89] "kube-apiserver-ha-189125" [707fe85b-0545-4306-aa6f-22580ddb6203] Running
	I0818 18:58:06.334474   25471 system_pods.go:89] "kube-apiserver-ha-189125-m02" [91926546-4ebb-4e81-a0eb-ffaff8d05fdc] Running
	I0818 18:58:06.334478   25471 system_pods.go:89] "kube-controller-manager-ha-189125" [97597204-06d9-4bd5-946d-3f429d2f0d35] Running
	I0818 18:58:06.334482   25471 system_pods.go:89] "kube-controller-manager-ha-189125-m02" [1a866408-5605-49f1-b183-a0c438685633] Running
	I0818 18:58:06.334487   25471 system_pods.go:89] "kube-proxy-96xwx" [c3f6dfae-e097-4889-933b-433f1b6b78fe] Running
	I0818 18:58:06.334492   25471 system_pods.go:89] "kube-proxy-scwlr" [03131eab-be49-4cb1-a0a6-1349f0f8eef7] Running
	I0818 18:58:06.334496   25471 system_pods.go:89] "kube-scheduler-ha-189125" [48202e0e-cebc-47fd-b18a-1dc6372caf8a] Running
	I0818 18:58:06.334499   25471 system_pods.go:89] "kube-scheduler-ha-189125-m02" [cc583916-30b6-46a6-ab8a-651f68065443] Running
	I0818 18:58:06.334502   25471 system_pods.go:89] "kube-vip-ha-189125" [0546880a-99fa-4d9a-a754-586b3b7921ee] Running
	I0818 18:58:06.334505   25471 system_pods.go:89] "kube-vip-ha-189125-m02" [ad04a007-45f2-4a01-97e3-202fa39a028a] Running
	I0818 18:58:06.334508   25471 system_pods.go:89] "storage-provisioner" [35b948dd-9b74-4f76-9cdb-82e0901fc421] Running
	I0818 18:58:06.334513   25471 system_pods.go:126] duration metric: took 204.991892ms to wait for k8s-apps to be running ...
	I0818 18:58:06.334520   25471 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 18:58:06.334561   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:58:06.349147   25471 system_svc.go:56] duration metric: took 14.617419ms WaitForService to wait for kubelet
	I0818 18:58:06.349186   25471 kubeadm.go:582] duration metric: took 22.61678389s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:58:06.349210   25471 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:58:06.524534   25471 request.go:632] Waited for 175.252959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes
	I0818 18:58:06.524591   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes
	I0818 18:58:06.524610   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:06.524618   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:06.524622   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:06.528253   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:58:06.529126   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:58:06.529151   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:58:06.529164   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:58:06.529169   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:58:06.529175   25471 node_conditions.go:105] duration metric: took 179.959806ms to run NodePressure ...
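
The NodePressure verification above reads each node's reported capacity from GET /api/v1/nodes. A rough client-go sketch of the same read follows; it is an assumption-laden illustration rather than the code that produced these log lines, and the kubeconfig path is hypothetical.

    // node_capacity.go: list nodes and print CPU / ephemeral-storage capacity,
    // roughly what the NodePressure check above inspects (illustrative only).
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; minikube builds its client differently.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
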
	I0818 18:58:06.529195   25471 start.go:241] waiting for startup goroutines ...
	I0818 18:58:06.529225   25471 start.go:255] writing updated cluster config ...
	I0818 18:58:06.531778   25471 out.go:201] 
	I0818 18:58:06.533765   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:58:06.533895   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:58:06.535954   25471 out.go:177] * Starting "ha-189125-m03" control-plane node in "ha-189125" cluster
	I0818 18:58:06.537589   25471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:58:06.537616   25471 cache.go:56] Caching tarball of preloaded images
	I0818 18:58:06.537730   25471 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 18:58:06.537745   25471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 18:58:06.537887   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:58:06.538151   25471 start.go:360] acquireMachinesLock for ha-189125-m03: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:58:06.538215   25471 start.go:364] duration metric: took 39.455µs to acquireMachinesLock for "ha-189125-m03"
	I0818 18:58:06.538240   25471 start.go:93] Provisioning new machine with config: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:58:06.538374   25471 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0818 18:58:06.540116   25471 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 18:58:06.540221   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:58:06.540264   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:58:06.556326   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
	I0818 18:58:06.556846   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:58:06.557368   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:58:06.557404   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:58:06.557678   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:58:06.557843   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetMachineName
	I0818 18:58:06.557989   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:06.558146   25471 start.go:159] libmachine.API.Create for "ha-189125" (driver="kvm2")
	I0818 18:58:06.558176   25471 client.go:168] LocalClient.Create starting
	I0818 18:58:06.558212   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 18:58:06.558253   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:58:06.558273   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:58:06.558334   25471 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 18:58:06.558359   25471 main.go:141] libmachine: Decoding PEM data...
	I0818 18:58:06.558384   25471 main.go:141] libmachine: Parsing certificate...
	I0818 18:58:06.558409   25471 main.go:141] libmachine: Running pre-create checks...
	I0818 18:58:06.558420   25471 main.go:141] libmachine: (ha-189125-m03) Calling .PreCreateCheck
	I0818 18:58:06.558593   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetConfigRaw
	I0818 18:58:06.558983   25471 main.go:141] libmachine: Creating machine...
	I0818 18:58:06.558999   25471 main.go:141] libmachine: (ha-189125-m03) Calling .Create
	I0818 18:58:06.559098   25471 main.go:141] libmachine: (ha-189125-m03) Creating KVM machine...
	I0818 18:58:06.560323   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found existing default KVM network
	I0818 18:58:06.560408   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found existing private KVM network mk-ha-189125
	I0818 18:58:06.560602   25471 main.go:141] libmachine: (ha-189125-m03) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03 ...
	I0818 18:58:06.560626   25471 main.go:141] libmachine: (ha-189125-m03) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:58:06.560723   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:06.560598   26431 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:58:06.560772   25471 main.go:141] libmachine: (ha-189125-m03) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 18:58:06.794237   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:06.794110   26431 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa...
	I0818 18:58:06.891457   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:06.891293   26431 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/ha-189125-m03.rawdisk...
	I0818 18:58:06.891488   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Writing magic tar header
	I0818 18:58:06.891514   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Writing SSH key tar header
	I0818 18:58:06.891530   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:06.891449   26431 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03 ...
	I0818 18:58:06.891547   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03
	I0818 18:58:06.891614   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03 (perms=drwx------)
	I0818 18:58:06.891642   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 18:58:06.891657   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 18:58:06.891672   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 18:58:06.891684   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:58:06.891700   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 18:58:06.891714   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 18:58:06.891728   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 18:58:06.891746   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home/jenkins
	I0818 18:58:06.891760   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 18:58:06.891775   25471 main.go:141] libmachine: (ha-189125-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 18:58:06.891784   25471 main.go:141] libmachine: (ha-189125-m03) Creating domain...
	I0818 18:58:06.891796   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Checking permissions on dir: /home
	I0818 18:58:06.891813   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Skipping /home - not owner
	I0818 18:58:06.892722   25471 main.go:141] libmachine: (ha-189125-m03) define libvirt domain using xml: 
	I0818 18:58:06.892737   25471 main.go:141] libmachine: (ha-189125-m03) <domain type='kvm'>
	I0818 18:58:06.892746   25471 main.go:141] libmachine: (ha-189125-m03)   <name>ha-189125-m03</name>
	I0818 18:58:06.892753   25471 main.go:141] libmachine: (ha-189125-m03)   <memory unit='MiB'>2200</memory>
	I0818 18:58:06.892761   25471 main.go:141] libmachine: (ha-189125-m03)   <vcpu>2</vcpu>
	I0818 18:58:06.892766   25471 main.go:141] libmachine: (ha-189125-m03)   <features>
	I0818 18:58:06.892775   25471 main.go:141] libmachine: (ha-189125-m03)     <acpi/>
	I0818 18:58:06.892782   25471 main.go:141] libmachine: (ha-189125-m03)     <apic/>
	I0818 18:58:06.892792   25471 main.go:141] libmachine: (ha-189125-m03)     <pae/>
	I0818 18:58:06.892802   25471 main.go:141] libmachine: (ha-189125-m03)     
	I0818 18:58:06.892812   25471 main.go:141] libmachine: (ha-189125-m03)   </features>
	I0818 18:58:06.892824   25471 main.go:141] libmachine: (ha-189125-m03)   <cpu mode='host-passthrough'>
	I0818 18:58:06.892835   25471 main.go:141] libmachine: (ha-189125-m03)   
	I0818 18:58:06.892846   25471 main.go:141] libmachine: (ha-189125-m03)   </cpu>
	I0818 18:58:06.892858   25471 main.go:141] libmachine: (ha-189125-m03)   <os>
	I0818 18:58:06.892869   25471 main.go:141] libmachine: (ha-189125-m03)     <type>hvm</type>
	I0818 18:58:06.892880   25471 main.go:141] libmachine: (ha-189125-m03)     <boot dev='cdrom'/>
	I0818 18:58:06.892890   25471 main.go:141] libmachine: (ha-189125-m03)     <boot dev='hd'/>
	I0818 18:58:06.892899   25471 main.go:141] libmachine: (ha-189125-m03)     <bootmenu enable='no'/>
	I0818 18:58:06.892913   25471 main.go:141] libmachine: (ha-189125-m03)   </os>
	I0818 18:58:06.892926   25471 main.go:141] libmachine: (ha-189125-m03)   <devices>
	I0818 18:58:06.892937   25471 main.go:141] libmachine: (ha-189125-m03)     <disk type='file' device='cdrom'>
	I0818 18:58:06.892956   25471 main.go:141] libmachine: (ha-189125-m03)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/boot2docker.iso'/>
	I0818 18:58:06.892967   25471 main.go:141] libmachine: (ha-189125-m03)       <target dev='hdc' bus='scsi'/>
	I0818 18:58:06.892979   25471 main.go:141] libmachine: (ha-189125-m03)       <readonly/>
	I0818 18:58:06.892991   25471 main.go:141] libmachine: (ha-189125-m03)     </disk>
	I0818 18:58:06.893001   25471 main.go:141] libmachine: (ha-189125-m03)     <disk type='file' device='disk'>
	I0818 18:58:06.893010   25471 main.go:141] libmachine: (ha-189125-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 18:58:06.893019   25471 main.go:141] libmachine: (ha-189125-m03)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/ha-189125-m03.rawdisk'/>
	I0818 18:58:06.893027   25471 main.go:141] libmachine: (ha-189125-m03)       <target dev='hda' bus='virtio'/>
	I0818 18:58:06.893032   25471 main.go:141] libmachine: (ha-189125-m03)     </disk>
	I0818 18:58:06.893041   25471 main.go:141] libmachine: (ha-189125-m03)     <interface type='network'>
	I0818 18:58:06.893046   25471 main.go:141] libmachine: (ha-189125-m03)       <source network='mk-ha-189125'/>
	I0818 18:58:06.893051   25471 main.go:141] libmachine: (ha-189125-m03)       <model type='virtio'/>
	I0818 18:58:06.893059   25471 main.go:141] libmachine: (ha-189125-m03)     </interface>
	I0818 18:58:06.893065   25471 main.go:141] libmachine: (ha-189125-m03)     <interface type='network'>
	I0818 18:58:06.893077   25471 main.go:141] libmachine: (ha-189125-m03)       <source network='default'/>
	I0818 18:58:06.893088   25471 main.go:141] libmachine: (ha-189125-m03)       <model type='virtio'/>
	I0818 18:58:06.893100   25471 main.go:141] libmachine: (ha-189125-m03)     </interface>
	I0818 18:58:06.893110   25471 main.go:141] libmachine: (ha-189125-m03)     <serial type='pty'>
	I0818 18:58:06.893118   25471 main.go:141] libmachine: (ha-189125-m03)       <target port='0'/>
	I0818 18:58:06.893123   25471 main.go:141] libmachine: (ha-189125-m03)     </serial>
	I0818 18:58:06.893130   25471 main.go:141] libmachine: (ha-189125-m03)     <console type='pty'>
	I0818 18:58:06.893138   25471 main.go:141] libmachine: (ha-189125-m03)       <target type='serial' port='0'/>
	I0818 18:58:06.893143   25471 main.go:141] libmachine: (ha-189125-m03)     </console>
	I0818 18:58:06.893166   25471 main.go:141] libmachine: (ha-189125-m03)     <rng model='virtio'>
	I0818 18:58:06.893180   25471 main.go:141] libmachine: (ha-189125-m03)       <backend model='random'>/dev/random</backend>
	I0818 18:58:06.893190   25471 main.go:141] libmachine: (ha-189125-m03)     </rng>
	I0818 18:58:06.893200   25471 main.go:141] libmachine: (ha-189125-m03)     
	I0818 18:58:06.893207   25471 main.go:141] libmachine: (ha-189125-m03)     
	I0818 18:58:06.893217   25471 main.go:141] libmachine: (ha-189125-m03)   </devices>
	I0818 18:58:06.893225   25471 main.go:141] libmachine: (ha-189125-m03) </domain>
	I0818 18:58:06.893231   25471 main.go:141] libmachine: (ha-189125-m03) 
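
The XML dumped above is what the kvm2 driver hands to libvirt to define and boot the guest. As a hedged sketch of that step, shown below and assuming the libvirt.org/go/libvirt bindings and a hypothetical file holding the XML, this is not the docker-machine-driver-kvm2 source itself.

    // define_domain.go: rough sketch of defining and starting a KVM guest from a
    // prebuilt domain XML string (assumes the libvirt.org/go/libvirt bindings).
    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config above
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Hypothetical file containing the domain XML shown in the log.
        xml, err := os.ReadFile("ha-189125-m03.xml")
        if err != nil {
            log.Fatal(err)
        }
        dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boots the guest ("Creating domain...")
            log.Fatal(err)
        }
        log.Println("domain defined and started")
    }
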
	I0818 18:58:06.901511   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:ad:03:e4 in network default
	I0818 18:58:06.902086   25471 main.go:141] libmachine: (ha-189125-m03) Ensuring networks are active...
	I0818 18:58:06.902129   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:06.903085   25471 main.go:141] libmachine: (ha-189125-m03) Ensuring network default is active
	I0818 18:58:06.903554   25471 main.go:141] libmachine: (ha-189125-m03) Ensuring network mk-ha-189125 is active
	I0818 18:58:06.903905   25471 main.go:141] libmachine: (ha-189125-m03) Getting domain xml...
	I0818 18:58:06.904891   25471 main.go:141] libmachine: (ha-189125-m03) Creating domain...
	I0818 18:58:08.152868   25471 main.go:141] libmachine: (ha-189125-m03) Waiting to get IP...
	I0818 18:58:08.153689   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:08.154064   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:08.154122   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:08.154055   26431 retry.go:31] will retry after 268.490085ms: waiting for machine to come up
	I0818 18:58:08.424531   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:08.425036   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:08.425065   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:08.424979   26431 retry.go:31] will retry after 316.367894ms: waiting for machine to come up
	I0818 18:58:08.742560   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:08.743048   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:08.743069   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:08.743020   26431 retry.go:31] will retry after 371.13386ms: waiting for machine to come up
	I0818 18:58:09.115801   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:09.116351   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:09.116396   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:09.116284   26431 retry.go:31] will retry after 397.759321ms: waiting for machine to come up
	I0818 18:58:09.515854   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:09.516285   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:09.516316   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:09.516238   26431 retry.go:31] will retry after 578.790648ms: waiting for machine to come up
	I0818 18:58:10.097094   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:10.097525   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:10.097551   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:10.097469   26431 retry.go:31] will retry after 721.378969ms: waiting for machine to come up
	I0818 18:58:10.820162   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:10.820625   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:10.820653   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:10.820524   26431 retry.go:31] will retry after 1.086370836s: waiting for machine to come up
	I0818 18:58:11.908115   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:11.908506   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:11.908533   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:11.908493   26431 retry.go:31] will retry after 1.087510486s: waiting for machine to come up
	I0818 18:58:12.997612   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:12.998073   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:12.998106   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:12.998005   26431 retry.go:31] will retry after 1.209672816s: waiting for machine to come up
	I0818 18:58:14.209366   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:14.209806   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:14.209833   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:14.209757   26431 retry.go:31] will retry after 1.547070722s: waiting for machine to come up
	I0818 18:58:15.759631   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:15.760118   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:15.760146   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:15.760096   26431 retry.go:31] will retry after 2.328434742s: waiting for machine to come up
	I0818 18:58:18.091165   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:18.091673   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:18.091700   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:18.091630   26431 retry.go:31] will retry after 3.093157403s: waiting for machine to come up
	I0818 18:58:21.188443   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:21.188880   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:21.188904   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:21.188824   26431 retry.go:31] will retry after 4.344973301s: waiting for machine to come up
	I0818 18:58:25.536417   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:25.536845   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find current IP address of domain ha-189125-m03 in network mk-ha-189125
	I0818 18:58:25.536872   25471 main.go:141] libmachine: (ha-189125-m03) DBG | I0818 18:58:25.536798   26431 retry.go:31] will retry after 4.579228582s: waiting for machine to come up
	I0818 18:58:30.120729   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.120845   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has current primary IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.120862   25471 main.go:141] libmachine: (ha-189125-m03) Found IP for machine: 192.168.39.170
	I0818 18:58:30.120888   25471 main.go:141] libmachine: (ha-189125-m03) Reserving static IP address...
	I0818 18:58:30.121350   25471 main.go:141] libmachine: (ha-189125-m03) DBG | unable to find host DHCP lease matching {name: "ha-189125-m03", mac: "52:54:00:df:db:3a", ip: "192.168.39.170"} in network mk-ha-189125
	I0818 18:58:30.195549   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Getting to WaitForSSH function...
	I0818 18:58:30.195577   25471 main.go:141] libmachine: (ha-189125-m03) Reserved static IP address: 192.168.39.170
	I0818 18:58:30.195589   25471 main.go:141] libmachine: (ha-189125-m03) Waiting for SSH to be available...
	I0818 18:58:30.199159   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.199865   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.199895   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.200103   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Using SSH client type: external
	I0818 18:58:30.200141   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa (-rw-------)
	I0818 18:58:30.200171   25471 main.go:141] libmachine: (ha-189125-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 18:58:30.200192   25471 main.go:141] libmachine: (ha-189125-m03) DBG | About to run SSH command:
	I0818 18:58:30.200207   25471 main.go:141] libmachine: (ha-189125-m03) DBG | exit 0
	I0818 18:58:30.335735   25471 main.go:141] libmachine: (ha-189125-m03) DBG | SSH cmd err, output: <nil>: 
	I0818 18:58:30.335930   25471 main.go:141] libmachine: (ha-189125-m03) KVM machine creation complete!
	I0818 18:58:30.336254   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetConfigRaw
	I0818 18:58:30.336815   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:30.337015   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:30.337157   25471 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 18:58:30.337169   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 18:58:30.338391   25471 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 18:58:30.338407   25471 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 18:58:30.338416   25471 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 18:58:30.338423   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.340512   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.340848   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.340875   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.341030   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.341194   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.341363   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.341507   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.341669   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:30.341934   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:30.341947   25471 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 18:58:30.454732   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
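
The "exit 0" probe above is how the provisioner decides that the new VM's SSH server is reachable. A minimal sketch of an equivalent probe using golang.org/x/crypto/ssh follows; the host, user, and key path are copied from the log, and the rest is an assumption rather than the libmachine implementation.

    // ssh_probe.go: run `exit 0` against a freshly created VM to confirm SSH is up.
    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.39.170:22", cfg)
        if err != nil {
            log.Fatal("ssh not ready yet: ", err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        if err := session.Run("exit 0"); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }
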
	I0818 18:58:30.454760   25471 main.go:141] libmachine: Detecting the provisioner...
	I0818 18:58:30.454771   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.457654   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.458020   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.458051   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.458166   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.458365   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.458543   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.458682   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.458850   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:30.459053   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:30.459067   25471 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 18:58:30.572018   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 18:58:30.572095   25471 main.go:141] libmachine: found compatible host: buildroot
	I0818 18:58:30.572108   25471 main.go:141] libmachine: Provisioning with buildroot...
	I0818 18:58:30.572124   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetMachineName
	I0818 18:58:30.572363   25471 buildroot.go:166] provisioning hostname "ha-189125-m03"
	I0818 18:58:30.572397   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetMachineName
	I0818 18:58:30.572552   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.575238   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.575618   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.575646   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.575812   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.575983   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.576145   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.576274   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.576408   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:30.576602   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:30.576614   25471 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-189125-m03 && echo "ha-189125-m03" | sudo tee /etc/hostname
	I0818 18:58:30.707730   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125-m03
	
	I0818 18:58:30.707760   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.710383   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.710718   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.710742   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.710940   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.711111   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.711243   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.711352   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.711506   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:30.711666   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:30.711681   25471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-189125-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-189125-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-189125-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:58:30.834309   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:58:30.834343   25471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 18:58:30.834362   25471 buildroot.go:174] setting up certificates
	I0818 18:58:30.834373   25471 provision.go:84] configureAuth start
	I0818 18:58:30.834386   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetMachineName
	I0818 18:58:30.834651   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 18:58:30.837186   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.837472   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.837505   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.837670   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.840052   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.840424   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.840446   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.840564   25471 provision.go:143] copyHostCerts
	I0818 18:58:30.840588   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:58:30.840619   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 18:58:30.840631   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 18:58:30.840693   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 18:58:30.840773   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:58:30.840793   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 18:58:30.840799   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 18:58:30.840839   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 18:58:30.840891   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:58:30.840916   25471 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 18:58:30.840925   25471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 18:58:30.840957   25471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 18:58:30.841147   25471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.ha-189125-m03 san=[127.0.0.1 192.168.39.170 ha-189125-m03 localhost minikube]
	I0818 18:58:30.904128   25471 provision.go:177] copyRemoteCerts
	I0818 18:58:30.904182   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:58:30.904207   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:30.906881   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.907285   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:30.907312   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:30.907508   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:30.907702   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:30.907863   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:30.907977   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 18:58:30.994118   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 18:58:30.994199   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:58:31.020830   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 18:58:31.020916   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 18:58:31.046410   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 18:58:31.046483   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 18:58:31.071787   25471 provision.go:87] duration metric: took 237.40302ms to configureAuth
	I0818 18:58:31.071814   25471 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:58:31.072024   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:58:31.072095   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:31.074367   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.074828   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.074856   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.075151   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.075397   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.075554   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.075687   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.075835   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:31.075988   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:31.076001   25471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 18:58:31.355802   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 18:58:31.355827   25471 main.go:141] libmachine: Checking connection to Docker...
	I0818 18:58:31.355835   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetURL
	I0818 18:58:31.357216   25471 main.go:141] libmachine: (ha-189125-m03) DBG | Using libvirt version 6000000
	I0818 18:58:31.359482   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.359881   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.359906   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.360077   25471 main.go:141] libmachine: Docker is up and running!
	I0818 18:58:31.360099   25471 main.go:141] libmachine: Reticulating splines...
	I0818 18:58:31.360106   25471 client.go:171] duration metric: took 24.801921523s to LocalClient.Create
	I0818 18:58:31.360132   25471 start.go:167] duration metric: took 24.801986295s to libmachine.API.Create "ha-189125"
	I0818 18:58:31.360144   25471 start.go:293] postStartSetup for "ha-189125-m03" (driver="kvm2")
	I0818 18:58:31.360155   25471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:58:31.360176   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.360402   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:58:31.360425   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:31.362382   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.362798   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.362824   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.363003   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.363188   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.363313   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.363486   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 18:58:31.455310   25471 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:58:31.459841   25471 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:58:31.459866   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 18:58:31.459944   25471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 18:58:31.460020   25471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 18:58:31.460029   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 18:58:31.460106   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 18:58:31.470112   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:58:31.494095   25471 start.go:296] duration metric: took 133.937124ms for postStartSetup
	I0818 18:58:31.494145   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetConfigRaw
	I0818 18:58:31.494662   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 18:58:31.496929   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.497280   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.497308   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.497538   25471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 18:58:31.497775   25471 start.go:128] duration metric: took 24.959388213s to createHost
	I0818 18:58:31.497799   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:31.500075   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.500410   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.500446   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.500609   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.500806   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.501007   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.501155   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.501310   25471 main.go:141] libmachine: Using SSH client type: native
	I0818 18:58:31.501472   25471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0818 18:58:31.501482   25471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:58:31.616447   25471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724007511.591856029
	
	I0818 18:58:31.616469   25471 fix.go:216] guest clock: 1724007511.591856029
	I0818 18:58:31.616477   25471 fix.go:229] Guest: 2024-08-18 18:58:31.591856029 +0000 UTC Remote: 2024-08-18 18:58:31.497787799 +0000 UTC m=+194.702784877 (delta=94.06823ms)
	I0818 18:58:31.616492   25471 fix.go:200] guest clock delta is within tolerance: 94.06823ms
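	As a quick cross-check of the clock-skew numbers above (both values taken from the two fix.go lines):

	    1724007511.591856029 − 1724007511.497787799 ≈ 0.094068230 s ≈ 94.06823 ms

	which matches the logged delta and is why it is accepted as within tolerance.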
	I0818 18:58:31.616499   25471 start.go:83] releasing machines lock for "ha-189125-m03", held for 25.078270959s
	I0818 18:58:31.616519   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.616743   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 18:58:31.619040   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.619414   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.619457   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.621834   25471 out.go:177] * Found network options:
	I0818 18:58:31.623264   25471 out.go:177]   - NO_PROXY=192.168.39.49,192.168.39.147
	W0818 18:58:31.624565   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 18:58:31.624590   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 18:58:31.624602   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.625157   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.625369   25471 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 18:58:31.625466   25471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:58:31.625499   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	W0818 18:58:31.625588   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	W0818 18:58:31.625613   25471 proxy.go:119] fail to check proxy env: Error ip not in block
	I0818 18:58:31.625676   25471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 18:58:31.625698   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 18:58:31.628154   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.628524   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.628550   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.628600   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.628696   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.628859   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.628995   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:31.629014   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:31.629018   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.629155   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 18:58:31.629191   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 18:58:31.629330   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 18:58:31.629451   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 18:58:31.629608   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 18:58:31.876456   25471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:58:31.882444   25471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:58:31.882510   25471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:58:31.899344   25471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 18:58:31.899370   25471 start.go:495] detecting cgroup driver to use...
	I0818 18:58:31.899444   25471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:58:31.916882   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:58:31.931105   25471 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:58:31.931154   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:58:31.947568   25471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:58:31.961682   25471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:58:32.087953   25471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:58:32.234741   25471 docker.go:233] disabling docker service ...
	I0818 18:58:32.234800   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:58:32.249234   25471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:58:32.264814   25471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:58:32.414355   25471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:58:32.534121   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:58:32.548870   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:58:32.567567   25471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 18:58:32.567637   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.578656   25471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 18:58:32.578731   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.589401   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.600042   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.614032   25471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:58:32.627923   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.641750   25471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 18:58:32.661479   25471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
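	Taken together, the sed edits above should leave the CRI-O drop-in with roughly the following key settings. This is a reconstruction from the commands themselves (TOML section headers omitted), not a dump of the actual file on ha-189125-m03:

	    # expected effect on /etc/crio/crio.conf.d/02-crio.conf
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]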
	I0818 18:58:32.673027   25471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:58:32.683174   25471 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 18:58:32.683230   25471 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 18:58:32.696444   25471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 18:58:32.706515   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:58:32.830513   25471 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 18:58:32.978738   25471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 18:58:32.978817   25471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 18:58:32.983525   25471 start.go:563] Will wait 60s for crictl version
	I0818 18:58:32.983587   25471 ssh_runner.go:195] Run: which crictl
	I0818 18:58:32.987190   25471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:58:33.031555   25471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 18:58:33.031636   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:58:33.065888   25471 ssh_runner.go:195] Run: crio --version
	I0818 18:58:33.098732   25471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 18:58:33.100160   25471 out.go:177]   - env NO_PROXY=192.168.39.49
	I0818 18:58:33.101438   25471 out.go:177]   - env NO_PROXY=192.168.39.49,192.168.39.147
	I0818 18:58:33.102607   25471 main.go:141] libmachine: (ha-189125-m03) Calling .GetIP
	I0818 18:58:33.105330   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:33.105644   25471 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 18:58:33.105669   25471 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 18:58:33.105878   25471 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:58:33.110328   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:58:33.123156   25471 mustload.go:65] Loading cluster: ha-189125
	I0818 18:58:33.123440   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:58:33.123746   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:58:33.123791   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:58:33.139743   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I0818 18:58:33.140162   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:58:33.140686   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:58:33.140707   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:58:33.140989   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:58:33.141179   25471 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 18:58:33.142679   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:58:33.142947   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:58:33.142978   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:58:33.156852   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39165
	I0818 18:58:33.157260   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:58:33.157661   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:58:33.157679   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:58:33.157939   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:58:33.158063   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:58:33.158232   25471 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125 for IP: 192.168.39.170
	I0818 18:58:33.158242   25471 certs.go:194] generating shared ca certs ...
	I0818 18:58:33.158256   25471 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:58:33.158398   25471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 18:58:33.158454   25471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 18:58:33.158466   25471 certs.go:256] generating profile certs ...
	I0818 18:58:33.158557   25471 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key
	I0818 18:58:33.158587   25471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ed2123f4
	I0818 18:58:33.158607   25471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ed2123f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49 192.168.39.147 192.168.39.170 192.168.39.254]
	I0818 18:58:33.272120   25471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ed2123f4 ...
	I0818 18:58:33.272147   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ed2123f4: {Name:mkeed75f0c4d827541cbfb95863e2cd154b9d88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:58:33.272346   25471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ed2123f4 ...
	I0818 18:58:33.272363   25471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ed2123f4: {Name:mkf3adaf9587675fabd0a13e2c88f3c36ecccf12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:58:33.272460   25471 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ed2123f4 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt
	I0818 18:58:33.272617   25471 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ed2123f4 -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key
	I0818 18:58:33.272783   25471 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key
	I0818 18:58:33.272803   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 18:58:33.272824   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 18:58:33.272843   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 18:58:33.272861   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 18:58:33.272877   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 18:58:33.272893   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 18:58:33.272910   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 18:58:33.272928   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 18:58:33.272989   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 18:58:33.273026   25471 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 18:58:33.273038   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:58:33.273073   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:58:33.273102   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:58:33.273134   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 18:58:33.273188   25471 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 18:58:33.273228   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 18:58:33.273248   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:58:33.273268   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 18:58:33.273310   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:58:33.276139   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:58:33.276588   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:58:33.276611   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:58:33.276790   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:58:33.276977   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:58:33.277136   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:58:33.277316   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:58:33.347784   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0818 18:58:33.352946   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0818 18:58:33.366917   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0818 18:58:33.371451   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0818 18:58:33.381894   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0818 18:58:33.386039   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0818 18:58:33.396178   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0818 18:58:33.400809   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0818 18:58:33.411811   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0818 18:58:33.416558   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0818 18:58:33.427317   25471 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0818 18:58:33.431864   25471 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0818 18:58:33.442548   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:58:33.467101   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:58:33.490738   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:58:33.514519   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 18:58:33.538233   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0818 18:58:33.562633   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 18:58:33.585453   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:58:33.608469   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 18:58:33.632275   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 18:58:33.655374   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:58:33.678478   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 18:58:33.701922   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0818 18:58:33.717767   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0818 18:58:33.733671   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0818 18:58:33.750087   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0818 18:58:33.766511   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0818 18:58:33.783020   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0818 18:58:33.800197   25471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0818 18:58:33.817708   25471 ssh_runner.go:195] Run: openssl version
	I0818 18:58:33.823459   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 18:58:33.834484   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 18:58:33.838909   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 18:58:33.838964   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 18:58:33.844921   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 18:58:33.856075   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 18:58:33.869016   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 18:58:33.873465   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 18:58:33.873530   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 18:58:33.879132   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 18:58:33.890574   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:58:33.901547   25471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:58:33.906147   25471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:58:33.906195   25471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:58:33.911951   25471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
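	The symlink names used above follow OpenSSL's hashed-directory convention: each link is named <subject-hash>.0, and the hash is exactly what the "openssl x509 -hash -noout" calls print. For example, for the minikube CA copied in this run:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem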
	I0818 18:58:33.923248   25471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:58:33.927548   25471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:58:33.927603   25471 kubeadm.go:934] updating node {m03 192.168.39.170 8443 v1.31.0 crio true true} ...
	I0818 18:58:33.927694   25471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-189125-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 18:58:33.927727   25471 kube-vip.go:115] generating kube-vip config ...
	I0818 18:58:33.927764   25471 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 18:58:33.945838   25471 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 18:58:33.945905   25471 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
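	In this config kube-vip runs as a static pod on each control-plane node, the instances elect a leader via the plndr-cp-lock lease in kube-system, and the leader answers ARP for the shared API-server address 192.168.39.254:8443 on eth0. A rough, informal way to see which node currently holds the VIP (assuming minikube ssh accepts the node name shown by "minikube node list"):

	    # hypothetical spot-check: which node owns the control-plane VIP right now?
	    kubectl --context ha-189125 -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'
	    minikube -p ha-189125 ssh -n ha-189125-m03 -- "ip -4 addr show eth0 | grep 192.168.39.254"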
	I0818 18:58:33.945973   25471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:58:33.956467   25471 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0818 18:58:33.956529   25471 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0818 18:58:33.966885   25471 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0818 18:58:33.966905   25471 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0818 18:58:33.966921   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0818 18:58:33.966893   25471 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0818 18:58:33.966935   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:58:33.966946   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0818 18:58:33.966998   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0818 18:58:33.967008   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0818 18:58:33.984099   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0818 18:58:33.984137   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0818 18:58:33.984162   25471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0818 18:58:33.984162   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0818 18:58:33.984206   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0818 18:58:33.984259   25471 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0818 18:58:34.010439   25471 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0818 18:58:34.010474   25471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
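	The binary.go lines above fetch kubeadm, kubectl and kubelet from dl.k8s.io, validate each against the published .sha256 file (the "?checksum=file:..." URLs), and then scp them into /var/lib/minikube/binaries/v1.31.0 on the new node. A hand-rolled equivalent of that checksum step for one binary, using the cache path and version from this run, would look roughly like:

	    cd /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.31.0
	    echo "$(curl -sL https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum -c -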
	I0818 18:58:34.847912   25471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0818 18:58:34.858200   25471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0818 18:58:34.877745   25471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:58:34.896957   25471 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
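	To confirm what this provisioning step just wrote onto the node, a rough manual equivalent (assuming minikube ssh's -p/-n flags and the node name from this run) would be:

	    minikube -p ha-189125 ssh -n ha-189125-m03 -- \
	      "sudo systemctl cat kubelet; sudo cat /etc/kubernetes/manifests/kube-vip.yaml"

	systemctl cat shows both the kubelet.service unit and the 10-kubeadm.conf drop-in copied above.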
	I0818 18:58:34.914953   25471 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0818 18:58:34.919267   25471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:58:34.932031   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:58:35.057593   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:58:35.074714   25471 host.go:66] Checking if "ha-189125" exists ...
	I0818 18:58:35.075157   25471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:58:35.075208   25471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:58:35.091741   25471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41985
	I0818 18:58:35.092123   25471 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:58:35.092638   25471 main.go:141] libmachine: Using API Version  1
	I0818 18:58:35.092663   25471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:58:35.093017   25471 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:58:35.093204   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:58:35.093346   25471 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 18:58:35.093346   25471 start.go:317] joinCluster: &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:58:35.093478   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0818 18:58:35.093501   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 18:58:35.096426   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:58:35.096858   25471 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 18:58:35.096881   25471 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 18:58:35.097056   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 18:58:35.097230   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 18:58:35.097359   25471 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 18:58:35.097457   25471 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 18:58:35.242678   25471 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:58:35.242733   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pfh3i3.msd5m9hr91q8t3xk --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m03 --control-plane --apiserver-advertise-address=192.168.39.170 --apiserver-bind-port=8443"
	I0818 18:58:58.086701   25471 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pfh3i3.msd5m9hr91q8t3xk --discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-189125-m03 --control-plane --apiserver-advertise-address=192.168.39.170 --apiserver-bind-port=8443": (22.843941512s)
	I0818 18:58:58.086735   25471 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0818 18:58:58.612445   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-189125-m03 minikube.k8s.io/updated_at=2024_08_18T18_58_58_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=ha-189125 minikube.k8s.io/primary=false
	I0818 18:58:58.749208   25471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-189125-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0818 18:58:58.875477   25471 start.go:319] duration metric: took 23.782126807s to joinCluster
	I0818 18:58:58.875549   25471 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 18:58:58.875909   25471 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:58:58.876983   25471 out.go:177] * Verifying Kubernetes components...
	I0818 18:58:58.878171   25471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:58:59.154035   25471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:58:59.172953   25471 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:58:59.173200   25471 kapi.go:59] client config for ha-189125: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.crt", KeyFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key", CAFile:"/home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0818 18:58:59.173255   25471 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.49:8443
	I0818 18:58:59.173483   25471 node_ready.go:35] waiting up to 6m0s for node "ha-189125-m03" to be "Ready" ...
	I0818 18:58:59.173569   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:58:59.173577   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:59.173585   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:59.173591   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:59.178424   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:58:59.674723   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:58:59.674750   25471 round_trippers.go:469] Request Headers:
	I0818 18:58:59.674760   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:58:59.674767   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:58:59.678690   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:00.174129   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:00.174154   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:00.174161   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:00.174165   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:00.177450   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:00.674473   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:00.674500   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:00.674512   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:00.674518   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:00.677814   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:01.173685   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:01.173711   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:01.173723   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:01.173728   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:01.180323   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:01.180865   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
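	node_ready.go is polling the Node object through the first control-plane endpoint (after the stale VIP host was overridden above). The same condition can be read directly with kubectl, for example:

	    kubectl --context ha-189125 get node ha-189125-m03 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	which keeps returning False, matching the repeated "Ready":"False" entries here, until the kubelet and CNI on m03 report healthy.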
	I0818 18:59:01.674488   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:01.674505   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:01.674512   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:01.674517   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:01.678917   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:02.174653   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:02.174675   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:02.174683   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:02.174687   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:02.177992   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:02.674257   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:02.674279   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:02.674289   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:02.674297   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:02.677513   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:03.173931   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:03.173951   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:03.173960   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:03.173965   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:03.177328   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:03.674447   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:03.674467   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:03.674475   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:03.674479   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:03.682440   25471 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0818 18:59:03.683040   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:04.174290   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:04.174312   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:04.174320   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:04.174325   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:04.177866   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:04.673718   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:04.673747   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:04.673759   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:04.673764   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:04.678150   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:05.174737   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:05.174767   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:05.174778   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:05.174786   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:05.178667   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:05.674455   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:05.674478   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:05.674489   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:05.674493   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:05.677845   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:06.173993   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:06.174014   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:06.174023   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:06.174028   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:06.177530   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:06.178175   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:06.673920   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:06.673941   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:06.673947   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:06.673952   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:06.677189   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:07.173837   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:07.173860   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:07.173867   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:07.173871   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:07.177228   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:07.674588   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:07.674616   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:07.674628   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:07.674633   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:07.678248   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:08.174659   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:08.174679   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:08.174688   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:08.174691   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:08.178126   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:08.178982   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:08.674418   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:08.674439   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:08.674447   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:08.674451   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:08.677884   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:09.173981   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:09.174006   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:09.174014   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:09.174022   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:09.177121   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:09.673871   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:09.673892   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:09.673900   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:09.673904   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:09.677228   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:10.174694   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:10.174720   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:10.174731   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:10.174740   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:10.178221   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:10.674236   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:10.674255   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:10.674263   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:10.674267   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:10.678799   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:10.679351   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:11.173724   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:11.173746   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:11.173753   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:11.173757   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:11.177123   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:11.674212   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:11.674233   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:11.674242   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:11.674247   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:11.677388   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:12.174158   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:12.174180   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:12.174186   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:12.174189   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:12.177934   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:12.674195   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:12.674220   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:12.674228   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:12.674233   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:12.678167   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:13.174434   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:13.174454   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:13.174464   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:13.174471   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:13.178435   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:13.179175   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:13.674545   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:13.674564   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:13.674573   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:13.674578   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:13.678002   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:14.173921   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:14.173943   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:14.173951   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:14.173955   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:14.176924   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:14.674549   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:14.674572   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:14.674582   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:14.674590   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:14.678491   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:15.173905   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:15.173930   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:15.173940   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:15.173947   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:15.177556   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:15.674219   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:15.674253   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:15.674263   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:15.674270   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:15.677109   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:15.677760   25471 node_ready.go:53] node "ha-189125-m03" has status "Ready":"False"
	I0818 18:59:16.173818   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:16.173839   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.173848   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.173853   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.177645   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:16.674199   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:16.674221   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.674229   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.674232   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.677227   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.677904   25471 node_ready.go:49] node "ha-189125-m03" has status "Ready":"True"
	I0818 18:59:16.677921   25471 node_ready.go:38] duration metric: took 17.504421939s for node "ha-189125-m03" to be "Ready" ...
	I0818 18:59:16.677932   25471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:59:16.678010   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:16.678021   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.678032   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.678038   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.684399   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:16.690781   25471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.690854   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-7xr26
	I0818 18:59:16.690864   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.690871   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.690881   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.693466   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.694169   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:16.694189   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.694196   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.694200   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.696615   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.697128   25471 pod_ready.go:93] pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:16.697144   25471 pod_ready.go:82] duration metric: took 6.341179ms for pod "coredns-6f6b679f8f-7xr26" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.697165   25471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.697213   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-q9j97
	I0818 18:59:16.697222   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.697232   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.697239   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.699879   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.700461   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:16.700476   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.700486   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.700492   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.703993   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:16.704464   25471 pod_ready.go:93] pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:16.704479   25471 pod_ready.go:82] duration metric: took 7.306351ms for pod "coredns-6f6b679f8f-q9j97" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.704488   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.704543   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125
	I0818 18:59:16.704551   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.704558   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.704562   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.711062   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:16.711643   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:16.711666   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.711676   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.711682   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.714117   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.714620   25471 pod_ready.go:93] pod "etcd-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:16.714637   25471 pod_ready.go:82] duration metric: took 10.14269ms for pod "etcd-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.714648   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.714700   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125-m02
	I0818 18:59:16.714710   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.714719   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.714727   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.717370   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.718114   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:16.718126   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.718136   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.718140   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.720601   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:16.721175   25471 pod_ready.go:93] pod "etcd-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:16.721199   25471 pod_ready.go:82] duration metric: took 6.534639ms for pod "etcd-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.721211   25471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:16.874560   25471 request.go:632] Waited for 153.286293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125-m03
	I0818 18:59:16.874652   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/etcd-ha-189125-m03
	I0818 18:59:16.874663   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:16.874672   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:16.874680   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:16.878076   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.075120   25471 request.go:632] Waited for 196.23254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:17.075204   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:17.075211   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.075219   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.075228   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.078611   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.079067   25471 pod_ready.go:93] pod "etcd-ha-189125-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:17.079084   25471 pod_ready.go:82] duration metric: took 357.865975ms for pod "etcd-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.079099   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.275205   25471 request.go:632] Waited for 196.036569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125
	I0818 18:59:17.275264   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125
	I0818 18:59:17.275269   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.275279   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.275284   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.278437   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.474562   25471 request.go:632] Waited for 195.375209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:17.474621   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:17.474627   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.474634   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.474638   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.477744   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.478408   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:17.478426   25471 pod_ready.go:82] duration metric: took 399.321932ms for pod "kube-apiserver-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.478435   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.674327   25471 request.go:632] Waited for 195.821614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m02
	I0818 18:59:17.674379   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m02
	I0818 18:59:17.674387   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.674397   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.674405   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.677541   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.874519   25471 request.go:632] Waited for 196.189776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:17.874616   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:17.874624   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:17.874633   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:17.874639   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:17.878226   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:17.879100   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:17.879117   25471 pod_ready.go:82] duration metric: took 400.676092ms for pod "kube-apiserver-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:17.879125   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.074583   25471 request.go:632] Waited for 195.394392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m03
	I0818 18:59:18.074659   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-189125-m03
	I0818 18:59:18.074664   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.074672   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.074678   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.078111   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:18.274617   25471 request.go:632] Waited for 195.740226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:18.274665   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:18.274670   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.274677   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.274681   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.277955   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:18.278582   25471 pod_ready.go:93] pod "kube-apiserver-ha-189125-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:18.278598   25471 pod_ready.go:82] duration metric: took 399.467222ms for pod "kube-apiserver-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.278607   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.474769   25471 request.go:632] Waited for 196.09145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125
	I0818 18:59:18.474819   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125
	I0818 18:59:18.474824   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.474831   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.474836   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.478936   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:18.674303   25471 request.go:632] Waited for 192.656613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:18.674366   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:18.674374   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.674384   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.674396   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.681387   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:18.682195   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:18.682252   25471 pod_ready.go:82] duration metric: took 403.636564ms for pod "kube-controller-manager-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.682269   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:18.874248   25471 request.go:632] Waited for 191.908122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m02
	I0818 18:59:18.874323   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m02
	I0818 18:59:18.874328   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:18.874336   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:18.874348   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:18.877687   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.074828   25471 request.go:632] Waited for 196.356111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:19.074879   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:19.074884   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.074892   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.074896   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.078205   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.078748   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:19.078766   25471 pod_ready.go:82] duration metric: took 396.490052ms for pod "kube-controller-manager-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.078776   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.274725   25471 request.go:632] Waited for 195.892952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m03
	I0818 18:59:19.274816   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-189125-m03
	I0818 18:59:19.274828   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.274839   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.274848   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.278314   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.474318   25471 request.go:632] Waited for 195.307964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:19.474388   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:19.474393   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.474401   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.474406   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.478526   25471 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0818 18:59:19.479040   25471 pod_ready.go:93] pod "kube-controller-manager-ha-189125-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:19.479061   25471 pod_ready.go:82] duration metric: took 400.279756ms for pod "kube-controller-manager-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.479071   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-22f8v" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.675115   25471 request.go:632] Waited for 195.971823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22f8v
	I0818 18:59:19.675214   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-22f8v
	I0818 18:59:19.675223   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.675233   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.675240   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.678741   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.874999   25471 request.go:632] Waited for 195.375691ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:19.875058   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:19.875075   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:19.875082   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:19.875086   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:19.878622   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:19.879349   25471 pod_ready.go:93] pod "kube-proxy-22f8v" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:19.879367   25471 pod_ready.go:82] duration metric: took 400.289102ms for pod "kube-proxy-22f8v" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:19.879397   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-96xwx" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.074522   25471 request.go:632] Waited for 195.044509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-96xwx
	I0818 18:59:20.074589   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-96xwx
	I0818 18:59:20.074594   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.074601   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.074605   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.077810   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:20.274920   25471 request.go:632] Waited for 196.321217ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:20.274997   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:20.275004   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.275016   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.275026   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.278538   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:20.279137   25471 pod_ready.go:93] pod "kube-proxy-96xwx" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:20.279154   25471 pod_ready.go:82] duration metric: took 399.750001ms for pod "kube-proxy-96xwx" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.279165   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-scwlr" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.474246   25471 request.go:632] Waited for 195.025426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scwlr
	I0818 18:59:20.474332   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-proxy-scwlr
	I0818 18:59:20.474339   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.474350   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.474355   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.477715   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:20.675028   25471 request.go:632] Waited for 196.381301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:20.675108   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:20.675117   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.675125   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.675131   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.678645   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:20.679602   25471 pod_ready.go:93] pod "kube-proxy-scwlr" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:20.679621   25471 pod_ready.go:82] duration metric: took 400.448549ms for pod "kube-proxy-scwlr" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.679631   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:20.874611   25471 request.go:632] Waited for 194.912911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125
	I0818 18:59:20.874680   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125
	I0818 18:59:20.874688   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:20.874699   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:20.874720   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:20.877830   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.074973   25471 request.go:632] Waited for 196.353479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:21.075035   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125
	I0818 18:59:21.075042   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.075051   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.075066   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.078025   25471 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0818 18:59:21.078680   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:21.078701   25471 pod_ready.go:82] duration metric: took 399.062562ms for pod "kube-scheduler-ha-189125" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.078710   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.274779   25471 request.go:632] Waited for 196.015279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m02
	I0818 18:59:21.274839   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m02
	I0818 18:59:21.274843   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.274851   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.274860   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.278054   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.475025   25471 request.go:632] Waited for 196.356085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:21.475083   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m02
	I0818 18:59:21.475090   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.475100   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.475110   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.478447   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.478980   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:21.478995   25471 pod_ready.go:82] duration metric: took 400.280156ms for pod "kube-scheduler-ha-189125-m02" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.479005   25471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.674686   25471 request.go:632] Waited for 195.59456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m03
	I0818 18:59:21.674739   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-189125-m03
	I0818 18:59:21.674744   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.674751   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.674757   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.678130   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.874970   25471 request.go:632] Waited for 196.153286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:21.875042   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes/ha-189125-m03
	I0818 18:59:21.875048   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.875055   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.875059   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.878472   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:21.878998   25471 pod_ready.go:93] pod "kube-scheduler-ha-189125-m03" in "kube-system" namespace has status "Ready":"True"
	I0818 18:59:21.879018   25471 pod_ready.go:82] duration metric: took 400.005768ms for pod "kube-scheduler-ha-189125-m03" in "kube-system" namespace to be "Ready" ...
	I0818 18:59:21.879030   25471 pod_ready.go:39] duration metric: took 5.201085905s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:59:21.879047   25471 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:59:21.879110   25471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:59:21.897264   25471 api_server.go:72] duration metric: took 23.02167788s to wait for apiserver process to appear ...
	I0818 18:59:21.897292   25471 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:59:21.897313   25471 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I0818 18:59:21.901779   25471 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I0818 18:59:21.901848   25471 round_trippers.go:463] GET https://192.168.39.49:8443/version
	I0818 18:59:21.901859   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:21.901869   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:21.901877   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:21.902704   25471 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0818 18:59:21.902762   25471 api_server.go:141] control plane version: v1.31.0
	I0818 18:59:21.902779   25471 api_server.go:131] duration metric: took 5.47891ms to wait for apiserver health ...
	I0818 18:59:21.902787   25471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:59:22.074544   25471 request.go:632] Waited for 171.697311ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:22.074606   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:22.074613   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:22.074624   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:22.074629   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:22.081124   25471 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0818 18:59:22.088910   25471 system_pods.go:59] 24 kube-system pods found
	I0818 18:59:22.088939   25471 system_pods.go:61] "coredns-6f6b679f8f-7xr26" [d4354313-0e2d-4d96-9cd1-a8f69a4aee26] Running
	I0818 18:59:22.088944   25471 system_pods.go:61] "coredns-6f6b679f8f-q9j97" [1f1c0597-6624-4a3e-8356-7d23555c2809] Running
	I0818 18:59:22.088949   25471 system_pods.go:61] "etcd-ha-189125" [441d8b87-bb19-479f-86a3-eda66e820a81] Running
	I0818 18:59:22.088952   25471 system_pods.go:61] "etcd-ha-189125-m02" [b656f93e-ece8-41c0-b109-584cf52e7b64] Running
	I0818 18:59:22.088955   25471 system_pods.go:61] "etcd-ha-189125-m03" [6e53b8eb-e64c-48db-8b5d-cd7c0dca3be5] Running
	I0818 18:59:22.088959   25471 system_pods.go:61] "kindnet-24xql" [ba1034b3-04c9-4c64-8fde-7b45ea42f21c] Running
	I0818 18:59:22.088963   25471 system_pods.go:61] "kindnet-jwxjh" [086477c9-e6eb-403e-adc7-b15347918484] Running
	I0818 18:59:22.088967   25471 system_pods.go:61] "kindnet-qhnpv" [b23c4910-6e34-46ec-98f2-60ec7ebdd064] Running
	I0818 18:59:22.088973   25471 system_pods.go:61] "kube-apiserver-ha-189125" [707fe85b-0545-4306-aa6f-22580ddb6203] Running
	I0818 18:59:22.088977   25471 system_pods.go:61] "kube-apiserver-ha-189125-m02" [91926546-4ebb-4e81-a0eb-ffaff8d05fdc] Running
	I0818 18:59:22.088982   25471 system_pods.go:61] "kube-apiserver-ha-189125-m03" [51f30627-fb00-4c82-a07f-e4b43a1e1575] Running
	I0818 18:59:22.088991   25471 system_pods.go:61] "kube-controller-manager-ha-189125" [97597204-06d9-4bd5-946d-3f429d2f0d35] Running
	I0818 18:59:22.088997   25471 system_pods.go:61] "kube-controller-manager-ha-189125-m02" [1a866408-5605-49f1-b183-a0c438685633] Running
	I0818 18:59:22.089004   25471 system_pods.go:61] "kube-controller-manager-ha-189125-m03" [128f040d-6a09-4c72-bf20-b7289d2a0708] Running
	I0818 18:59:22.089010   25471 system_pods.go:61] "kube-proxy-22f8v" [446b7123-e92b-4ce3-b3a4-d096e00ea7e9] Running
	I0818 18:59:22.089017   25471 system_pods.go:61] "kube-proxy-96xwx" [c3f6dfae-e097-4889-933b-433f1b6b78fe] Running
	I0818 18:59:22.089025   25471 system_pods.go:61] "kube-proxy-scwlr" [03131eab-be49-4cb1-a0a6-1349f0f8eef7] Running
	I0818 18:59:22.089028   25471 system_pods.go:61] "kube-scheduler-ha-189125" [48202e0e-cebc-47fd-b18a-1dc6372caf8a] Running
	I0818 18:59:22.089034   25471 system_pods.go:61] "kube-scheduler-ha-189125-m02" [cc583916-30b6-46a6-ab8a-651f68065443] Running
	I0818 18:59:22.089037   25471 system_pods.go:61] "kube-scheduler-ha-189125-m03" [c73cba87-81c0-4389-94f3-21b49a085a05] Running
	I0818 18:59:22.089041   25471 system_pods.go:61] "kube-vip-ha-189125" [0546880a-99fa-4d9a-a754-586b3b7921ee] Running
	I0818 18:59:22.089044   25471 system_pods.go:61] "kube-vip-ha-189125-m02" [ad04a007-45f2-4a01-97e3-202fa39a028a] Running
	I0818 18:59:22.089049   25471 system_pods.go:61] "kube-vip-ha-189125-m03" [993160f6-c484-4e27-9db6-733bf0839bec] Running
	I0818 18:59:22.089052   25471 system_pods.go:61] "storage-provisioner" [35b948dd-9b74-4f76-9cdb-82e0901fc421] Running
	I0818 18:59:22.089058   25471 system_pods.go:74] duration metric: took 186.266555ms to wait for pod list to return data ...
	I0818 18:59:22.089068   25471 default_sa.go:34] waiting for default service account to be created ...
	I0818 18:59:22.274502   25471 request.go:632] Waited for 185.354556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/default/serviceaccounts
	I0818 18:59:22.274553   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/default/serviceaccounts
	I0818 18:59:22.274557   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:22.274564   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:22.274570   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:22.278326   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:22.278433   25471 default_sa.go:45] found service account: "default"
	I0818 18:59:22.278448   25471 default_sa.go:55] duration metric: took 189.373266ms for default service account to be created ...
	I0818 18:59:22.278457   25471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 18:59:22.474985   25471 request.go:632] Waited for 196.46034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:22.475048   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/namespaces/kube-system/pods
	I0818 18:59:22.475055   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:22.475064   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:22.475073   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:22.482161   25471 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0818 18:59:22.489079   25471 system_pods.go:86] 24 kube-system pods found
	I0818 18:59:22.489105   25471 system_pods.go:89] "coredns-6f6b679f8f-7xr26" [d4354313-0e2d-4d96-9cd1-a8f69a4aee26] Running
	I0818 18:59:22.489110   25471 system_pods.go:89] "coredns-6f6b679f8f-q9j97" [1f1c0597-6624-4a3e-8356-7d23555c2809] Running
	I0818 18:59:22.489114   25471 system_pods.go:89] "etcd-ha-189125" [441d8b87-bb19-479f-86a3-eda66e820a81] Running
	I0818 18:59:22.489118   25471 system_pods.go:89] "etcd-ha-189125-m02" [b656f93e-ece8-41c0-b109-584cf52e7b64] Running
	I0818 18:59:22.489121   25471 system_pods.go:89] "etcd-ha-189125-m03" [6e53b8eb-e64c-48db-8b5d-cd7c0dca3be5] Running
	I0818 18:59:22.489125   25471 system_pods.go:89] "kindnet-24xql" [ba1034b3-04c9-4c64-8fde-7b45ea42f21c] Running
	I0818 18:59:22.489128   25471 system_pods.go:89] "kindnet-jwxjh" [086477c9-e6eb-403e-adc7-b15347918484] Running
	I0818 18:59:22.489132   25471 system_pods.go:89] "kindnet-qhnpv" [b23c4910-6e34-46ec-98f2-60ec7ebdd064] Running
	I0818 18:59:22.489135   25471 system_pods.go:89] "kube-apiserver-ha-189125" [707fe85b-0545-4306-aa6f-22580ddb6203] Running
	I0818 18:59:22.489138   25471 system_pods.go:89] "kube-apiserver-ha-189125-m02" [91926546-4ebb-4e81-a0eb-ffaff8d05fdc] Running
	I0818 18:59:22.489142   25471 system_pods.go:89] "kube-apiserver-ha-189125-m03" [51f30627-fb00-4c82-a07f-e4b43a1e1575] Running
	I0818 18:59:22.489146   25471 system_pods.go:89] "kube-controller-manager-ha-189125" [97597204-06d9-4bd5-946d-3f429d2f0d35] Running
	I0818 18:59:22.489153   25471 system_pods.go:89] "kube-controller-manager-ha-189125-m02" [1a866408-5605-49f1-b183-a0c438685633] Running
	I0818 18:59:22.489157   25471 system_pods.go:89] "kube-controller-manager-ha-189125-m03" [128f040d-6a09-4c72-bf20-b7289d2a0708] Running
	I0818 18:59:22.489161   25471 system_pods.go:89] "kube-proxy-22f8v" [446b7123-e92b-4ce3-b3a4-d096e00ea7e9] Running
	I0818 18:59:22.489165   25471 system_pods.go:89] "kube-proxy-96xwx" [c3f6dfae-e097-4889-933b-433f1b6b78fe] Running
	I0818 18:59:22.489172   25471 system_pods.go:89] "kube-proxy-scwlr" [03131eab-be49-4cb1-a0a6-1349f0f8eef7] Running
	I0818 18:59:22.489176   25471 system_pods.go:89] "kube-scheduler-ha-189125" [48202e0e-cebc-47fd-b18a-1dc6372caf8a] Running
	I0818 18:59:22.489179   25471 system_pods.go:89] "kube-scheduler-ha-189125-m02" [cc583916-30b6-46a6-ab8a-651f68065443] Running
	I0818 18:59:22.489185   25471 system_pods.go:89] "kube-scheduler-ha-189125-m03" [c73cba87-81c0-4389-94f3-21b49a085a05] Running
	I0818 18:59:22.489188   25471 system_pods.go:89] "kube-vip-ha-189125" [0546880a-99fa-4d9a-a754-586b3b7921ee] Running
	I0818 18:59:22.489194   25471 system_pods.go:89] "kube-vip-ha-189125-m02" [ad04a007-45f2-4a01-97e3-202fa39a028a] Running
	I0818 18:59:22.489197   25471 system_pods.go:89] "kube-vip-ha-189125-m03" [993160f6-c484-4e27-9db6-733bf0839bec] Running
	I0818 18:59:22.489202   25471 system_pods.go:89] "storage-provisioner" [35b948dd-9b74-4f76-9cdb-82e0901fc421] Running
	I0818 18:59:22.489207   25471 system_pods.go:126] duration metric: took 210.743641ms to wait for k8s-apps to be running ...
	I0818 18:59:22.489216   25471 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 18:59:22.489259   25471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:59:22.504355   25471 system_svc.go:56] duration metric: took 15.129698ms WaitForService to wait for kubelet
	I0818 18:59:22.504386   25471 kubeadm.go:582] duration metric: took 23.628804308s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:59:22.504409   25471 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:59:22.674529   25471 request.go:632] Waited for 170.025672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.49:8443/api/v1/nodes
	I0818 18:59:22.674579   25471 round_trippers.go:463] GET https://192.168.39.49:8443/api/v1/nodes
	I0818 18:59:22.674584   25471 round_trippers.go:469] Request Headers:
	I0818 18:59:22.674591   25471 round_trippers.go:473]     Accept: application/json, */*
	I0818 18:59:22.674596   25471 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0818 18:59:22.678464   25471 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0818 18:59:22.679432   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:59:22.679453   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:59:22.679465   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:59:22.679469   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:59:22.679473   25471 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:59:22.679476   25471 node_conditions.go:123] node cpu capacity is 2
	I0818 18:59:22.679480   25471 node_conditions.go:105] duration metric: took 175.058999ms to run NodePressure ...
	I0818 18:59:22.679497   25471 start.go:241] waiting for startup goroutines ...
	I0818 18:59:22.679519   25471 start.go:255] writing updated cluster config ...
	I0818 18:59:22.679798   25471 ssh_runner.go:195] Run: rm -f paused
	I0818 18:59:22.731773   25471 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 18:59:22.733662   25471 out.go:177] * Done! kubectl is now configured to use "ha-189125" cluster and "default" namespace by default
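	A minimal sketch, assuming the "ha-189125" kubectl context written by the run above and a kubectl binary on the PATH: the CPU and ephemeral-storage capacities that the NodePressure check read from the API server can be listed by hand (the grep window of 6 lines is an assumption, wide enough to cover the Capacity block).
	  kubectl --context ha-189125 get nodes
	  kubectl --context ha-189125 describe node ha-189125 | grep -A 6 'Capacity:'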
	
	
	==> CRI-O <==
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.562688448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007841562665390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4078cc8d-a26a-4a94-8396-0b8a977a1e30 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.563560907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bf881d5-2cf9-42c7-be21-62e72aa87fe2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.563632473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5bf881d5-2cf9-42c7-be21-62e72aa87fe2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.563908950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724007567164495295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379297265805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379300678582,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135c67495a2cb89c56cca8caa093d8714a7ece48cf73f39e05fc0621bed72a37,PodSandboxId:2c884bafa871e9c85f2aea2fb886dbb448272034e6a94d3664290ffe5f8855fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724007379193169633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724007367338546266,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172400736
3376529475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9,PodSandboxId:04309b5215c4dc8fe94f1ba5fdb3ac8c79160d733be44be461dc6a09e6064091,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172400735438
7697025,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb7d6df05e3ce11ba7b3990f13150037,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724007351593170943,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0,PodSandboxId:05702b9002160611e66e662a1b238091c7a6f7a831c1393eab43feff845a4b73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724007351541675105,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724007351506819073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036,PodSandboxId:3e5f93e63a1d2a9b39ac0e4225131948fd1257f41a95a2e7da309f7c12bb103c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724007351474718818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5bf881d5-2cf9-42c7-be21-62e72aa87fe2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.612549016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31735359-4f0f-403b-8b2c-7ce26a07254c name=/runtime.v1.RuntimeService/Version
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.612659738Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31735359-4f0f-403b-8b2c-7ce26a07254c name=/runtime.v1.RuntimeService/Version
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.615124942Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e89904f7-444a-4e9b-888e-a99ded783c09 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.616716048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007841616686699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e89904f7-444a-4e9b-888e-a99ded783c09 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.617938682Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b736210-3e21-423d-9843-c1cdecc0f3e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.619035834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b736210-3e21-423d-9843-c1cdecc0f3e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.619461474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724007567164495295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379297265805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379300678582,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135c67495a2cb89c56cca8caa093d8714a7ece48cf73f39e05fc0621bed72a37,PodSandboxId:2c884bafa871e9c85f2aea2fb886dbb448272034e6a94d3664290ffe5f8855fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724007379193169633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724007367338546266,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172400736
3376529475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9,PodSandboxId:04309b5215c4dc8fe94f1ba5fdb3ac8c79160d733be44be461dc6a09e6064091,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172400735438
7697025,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb7d6df05e3ce11ba7b3990f13150037,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724007351593170943,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0,PodSandboxId:05702b9002160611e66e662a1b238091c7a6f7a831c1393eab43feff845a4b73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724007351541675105,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724007351506819073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036,PodSandboxId:3e5f93e63a1d2a9b39ac0e4225131948fd1257f41a95a2e7da309f7c12bb103c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724007351474718818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b736210-3e21-423d-9843-c1cdecc0f3e3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.666563091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cdd04fcd-9389-4a10-9fad-aa8f5172b7e1 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.666657586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cdd04fcd-9389-4a10-9fad-aa8f5172b7e1 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.667884503Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0040793a-dc84-44f0-98d6-258a6d46592f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.668386050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007841668362362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0040793a-dc84-44f0-98d6-258a6d46592f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.669056464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b36def3f-726b-4191-a738-bab2142a277a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.669194321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b36def3f-726b-4191-a738-bab2142a277a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.669448383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724007567164495295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379297265805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379300678582,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135c67495a2cb89c56cca8caa093d8714a7ece48cf73f39e05fc0621bed72a37,PodSandboxId:2c884bafa871e9c85f2aea2fb886dbb448272034e6a94d3664290ffe5f8855fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724007379193169633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724007367338546266,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172400736
3376529475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9,PodSandboxId:04309b5215c4dc8fe94f1ba5fdb3ac8c79160d733be44be461dc6a09e6064091,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172400735438
7697025,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb7d6df05e3ce11ba7b3990f13150037,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724007351593170943,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0,PodSandboxId:05702b9002160611e66e662a1b238091c7a6f7a831c1393eab43feff845a4b73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724007351541675105,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724007351506819073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036,PodSandboxId:3e5f93e63a1d2a9b39ac0e4225131948fd1257f41a95a2e7da309f7c12bb103c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724007351474718818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b36def3f-726b-4191-a738-bab2142a277a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.706636966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b1e33a5-d511-4b4f-a4cb-22cb020af1ed name=/runtime.v1.RuntimeService/Version
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.706725142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b1e33a5-d511-4b4f-a4cb-22cb020af1ed name=/runtime.v1.RuntimeService/Version
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.707807337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2df0fbb2-e4d2-4034-8546-8ce7ff3027be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.708344842Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007841708322223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2df0fbb2-e4d2-4034-8546-8ce7ff3027be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.708862678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=470b6f62-d85c-46e5-bd22-dbe38bbf4045 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.708913319Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=470b6f62-d85c-46e5-bd22-dbe38bbf4045 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:04:01 ha-189125 crio[685]: time="2024-08-18 19:04:01.709208904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724007567164495295,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379297265805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724007379300678582,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:135c67495a2cb89c56cca8caa093d8714a7ece48cf73f39e05fc0621bed72a37,PodSandboxId:2c884bafa871e9c85f2aea2fb886dbb448272034e6a94d3664290ffe5f8855fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724007379193169633,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724007367338546266,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172400736
3376529475,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9,PodSandboxId:04309b5215c4dc8fe94f1ba5fdb3ac8c79160d733be44be461dc6a09e6064091,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172400735438
7697025,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb7d6df05e3ce11ba7b3990f13150037,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724007351593170943,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0,PodSandboxId:05702b9002160611e66e662a1b238091c7a6f7a831c1393eab43feff845a4b73,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724007351541675105,Labels:map[string]string{io.kubernetes.container.name: kube-control
ler-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724007351506819073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,
io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036,PodSandboxId:3e5f93e63a1d2a9b39ac0e4225131948fd1257f41a95a2e7da309f7c12bb103c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724007351474718818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=470b6f62-d85c-46e5-bd22-dbe38bbf4045 name=/runtime.v1.RuntimeService/ListContainers
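	The ListContainers and ImageFsInfo entries above are CRI RPCs served by CRI-O on the node. A minimal sketch, assuming crictl is available inside the guest (as it normally is with minikube's CRI-O runtime) and using the same minikube binary path shown elsewhere in this report, pulls the same data by hand:
	  # list running containers and the image filesystem usage CRI-O reports
	  out/minikube-linux-amd64 -p ha-189125 ssh "sudo crictl ps"
	  out/minikube-linux-amd64 -p ha-189125 ssh "sudo crictl imagefsinfo"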
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1cbf1a420990c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   8cdf7a8433c4d       busybox-7dff88458-kxdwj
	181bcd36f89b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   c4e0fe307dc97       coredns-6f6b679f8f-q9j97
	f095c1d3ba818       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   0e090955bb301       coredns-6f6b679f8f-7xr26
	135c67495a2cb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   2c884bafa871e       storage-provisioner
	197dd2bffa6c8       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   c93b973b05129       kindnet-jwxjh
	d3f078fad6871       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   c28cd1212a8c0       kube-proxy-96xwx
	f9e43e0af59e6       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   04309b5215c4d       kube-vip-ha-189125
	79fc87641651d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago       Running             etcd                      0                   b20bbedf6c011       etcd-ha-189125
	972d7a97ac9ef       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago       Running             kube-controller-manager   0                   05702b9002160       kube-controller-manager-ha-189125
	8eb7a6513c9b9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago       Running             kube-scheduler            0                   6fe0bbacb48d2       kube-scheduler-ha-189125
	2d4a0eeafb631       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago       Running             kube-apiserver            0                   3e5f93e63a1d2       kube-apiserver-ha-189125
	
	
	==> coredns [181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2] <==
	[INFO] 10.244.1.2:55994 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151302s
	[INFO] 10.244.1.2:48950 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013492s
	[INFO] 10.244.1.2:59880 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115335s
	[INFO] 10.244.2.2:57275 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233377s
	[INFO] 10.244.2.2:56571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00135054s
	[INFO] 10.244.2.2:43437 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086979s
	[INFO] 10.244.0.4:53861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002025942s
	[INFO] 10.244.0.4:36847 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001326246s
	[INFO] 10.244.0.4:36223 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073856s
	[INFO] 10.244.0.4:53397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051079s
	[INFO] 10.244.0.4:60257 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077527s
	[INFO] 10.244.1.2:36105 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142033s
	[INFO] 10.244.2.2:43159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120043s
	[INFO] 10.244.2.2:48451 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105513s
	[INFO] 10.244.2.2:40617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090209s
	[INFO] 10.244.2.2:53467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079345s
	[INFO] 10.244.0.4:34375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009177s
	[INFO] 10.244.0.4:47256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098542s
	[INFO] 10.244.0.4:38739 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087517s
	[INFO] 10.244.1.2:44329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157424s
	[INFO] 10.244.1.2:52970 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000328904s
	[INFO] 10.244.2.2:35139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010364s
	[INFO] 10.244.2.2:51553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000143049s
	[INFO] 10.244.0.4:55737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097209s
	[INFO] 10.244.0.4:56754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040314s
	
	
	==> coredns [f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107] <==
	[INFO] 10.244.2.2:41178 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000198322s
	[INFO] 10.244.2.2:50482 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000139628s
	[INFO] 10.244.2.2:44346 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000148959s
	[INFO] 10.244.0.4:60109 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000098283s
	[INFO] 10.244.0.4:50813 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001489232s
	[INFO] 10.244.1.2:44640 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003618953s
	[INFO] 10.244.1.2:37984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161286s
	[INFO] 10.244.2.2:55904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150006s
	[INFO] 10.244.2.2:38276 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00189507s
	[INFO] 10.244.2.2:42054 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179179s
	[INFO] 10.244.2.2:35911 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190164s
	[INFO] 10.244.2.2:52357 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163157s
	[INFO] 10.244.0.4:38374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136266s
	[INFO] 10.244.0.4:33983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103666s
	[INFO] 10.244.0.4:42233 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069982s
	[INFO] 10.244.1.2:39502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134749s
	[INFO] 10.244.1.2:38715 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102799s
	[INFO] 10.244.1.2:55122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135608s
	[INFO] 10.244.0.4:56934 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000488s
	[INFO] 10.244.1.2:45200 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251667s
	[INFO] 10.244.1.2:35239 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131205s
	[INFO] 10.244.2.2:47108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152092s
	[INFO] 10.244.2.2:45498 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093397s
	[INFO] 10.244.0.4:52889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059058s
	[INFO] 10.244.0.4:55998 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042989s
	
	
	==> describe nodes <==
	Name:               ha-189125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T18_55_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:55:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:03:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:59:32 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:59:32 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:59:32 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:59:32 +0000   Sun, 18 Aug 2024 18:56:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    ha-189125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9520f8bfe7ab47fca640aa213dbc51c5
	  System UUID:                9520f8bf-e7ab-47fc-a640-aa213dbc51c5
	  Boot ID:                    d5000132-c81a-4416-b5cd-bc4cc58a7c4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kxdwj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 coredns-6f6b679f8f-7xr26             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m
	  kube-system                 coredns-6f6b679f8f-q9j97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m
	  kube-system                 etcd-ha-189125                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m7s
	  kube-system                 kindnet-jwxjh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m1s
	  kube-system                 kube-apiserver-ha-189125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-controller-manager-ha-189125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-proxy-96xwx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-scheduler-ha-189125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-vip-ha-189125                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m58s  kube-proxy       
	  Normal  Starting                 8m5s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m5s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m5s   kubelet          Node ha-189125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m5s   kubelet          Node ha-189125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m5s   kubelet          Node ha-189125 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m1s   node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal  NodeReady                7m44s  kubelet          Node ha-189125 status is now: NodeReady
	  Normal  RegisteredNode           6m13s  node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal  RegisteredNode           4m59s  node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	
	
	Name:               ha-189125-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T18_57_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:57:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:00:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 18 Aug 2024 18:59:43 +0000   Sun, 18 Aug 2024 19:01:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 18 Aug 2024 18:59:43 +0000   Sun, 18 Aug 2024 19:01:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 18 Aug 2024 18:59:43 +0000   Sun, 18 Aug 2024 19:01:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 18 Aug 2024 18:59:43 +0000   Sun, 18 Aug 2024 19:01:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    ha-189125-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3324dc2b927f496881437c52ed831dff
	  System UUID:                3324dc2b-927f-4968-8143-7c52ed831dff
	  Boot ID:                    6101e739-12c5-4cc4-a553-76e9cbc2860b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8bwfj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-189125-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m19s
	  kube-system                 kindnet-qhnpv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-189125-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-189125-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-scwlr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-189125-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-189125-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m16s                  kube-proxy       
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m13s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  NodeNotReady             2m46s                  node-controller  Node ha-189125-m02 status is now: NodeNotReady
	
	
	Name:               ha-189125-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T18_58_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:58:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:04:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:59:56 +0000   Sun, 18 Aug 2024 18:58:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:59:56 +0000   Sun, 18 Aug 2024 18:58:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:59:56 +0000   Sun, 18 Aug 2024 18:58:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:59:56 +0000   Sun, 18 Aug 2024 18:59:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-189125-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d3ec6cee66841f19e0d0001d5bf49e3
	  System UUID:                4d3ec6ce-e668-41f1-9e0d-0001d5bf49e3
	  Boot ID:                    585df22f-cf7d-498d-8ff9-1aca3ea7e00a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fvdcn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 etcd-ha-189125-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-24xql                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m8s
	  kube-system                 kube-apiserver-ha-189125-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-controller-manager-ha-189125-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-22f8v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-scheduler-ha-189125-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-vip-ha-189125-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node ha-189125-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node ha-189125-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)  kubelet          Node ha-189125-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m6s                 node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal  RegisteredNode           5m3s                 node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal  RegisteredNode           4m59s                node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	
	
	Name:               ha-189125-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T19_00_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:00:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:03:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:00:31 +0000   Sun, 18 Aug 2024 19:00:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:00:31 +0000   Sun, 18 Aug 2024 19:00:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:00:31 +0000   Sun, 18 Aug 2024 19:00:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:00:31 +0000   Sun, 18 Aug 2024 19:00:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    ha-189125-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aaeec6aea01d4746832fda2dc541437c
	  System UUID:                aaeec6ae-a01d-4746-832f-da2dc541437c
	  Boot ID:                    2ec6b825-44fb-4ba0-9681-61c7a55de5a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24hmx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-proxy-krtg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x2 over 4m3s)  kubelet          Node ha-189125-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x2 over 4m3s)  kubelet          Node ha-189125-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x2 over 4m3s)  kubelet          Node ha-189125-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal  NodeReady                3m41s                kubelet          Node ha-189125-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug18 18:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050548] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039074] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.782036] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.495044] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.591693] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.511172] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053311] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195743] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.133817] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.270401] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.027475] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +4.080385] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.059467] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140089] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.075123] kauditd_printk_skb: 79 callbacks suppressed
	[Aug18 18:56] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.023234] kauditd_printk_skb: 36 callbacks suppressed
	[Aug18 18:57] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125] <==
	{"level":"warn","ts":"2024-08-18T19:04:01.975854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:01.978005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:01.979608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:01.991561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:01.996352Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.005195Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.011802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.017739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.018816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.023558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.027795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.033676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.038996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.045245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.052247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.056875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.065271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.071353Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.076953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.082956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.086262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.091002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.098264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.106608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-18T19:04:02.117706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7f2a407b6bb4eb12","from":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:04:02 up 8 min,  0 users,  load average: 0.06, 0.14, 0.09
	Linux ha-189125 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6] <==
	I0818 19:03:28.452474       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:03:38.455950       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:03:38.456012       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:03:38.456197       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:03:38.456206       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:03:38.456262       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:03:38.456267       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:03:38.456317       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:03:38.456341       1 main.go:299] handling current node
	I0818 19:03:48.453015       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:03:48.453059       1 main.go:299] handling current node
	I0818 19:03:48.453073       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:03:48.453120       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:03:48.453264       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:03:48.453287       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:03:48.453352       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:03:48.453357       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:03:58.453932       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:03:58.454042       1 main.go:299] handling current node
	I0818 19:03:58.454156       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:03:58.454184       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:03:58.454334       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:03:58.454358       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:03:58.454415       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:03:58.454433       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036] <==
	W0818 18:55:56.382944       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.49]
	I0818 18:55:56.383990       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 18:55:56.390416       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0818 18:55:56.606559       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0818 18:55:57.547886       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0818 18:55:57.562444       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0818 18:55:57.576559       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0818 18:56:01.366139       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0818 18:56:02.263130       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0818 18:59:28.369785       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54992: use of closed network connection
	E0818 18:59:28.558513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55012: use of closed network connection
	E0818 18:59:28.742483       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55034: use of closed network connection
	E0818 18:59:28.943998       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55042: use of closed network connection
	E0818 18:59:29.123328       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55064: use of closed network connection
	E0818 18:59:29.294453       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55086: use of closed network connection
	E0818 18:59:29.469439       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55094: use of closed network connection
	E0818 18:59:29.648527       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55102: use of closed network connection
	E0818 18:59:29.809407       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55106: use of closed network connection
	E0818 18:59:30.094649       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55150: use of closed network connection
	E0818 18:59:30.279626       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55178: use of closed network connection
	E0818 18:59:30.453032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55194: use of closed network connection
	E0818 18:59:30.628588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55198: use of closed network connection
	E0818 18:59:30.795912       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55206: use of closed network connection
	E0818 18:59:30.992016       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55214: use of closed network connection
	W0818 19:00:56.400647       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.49]
	
	
	==> kube-controller-manager [972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0] <==
	I0818 19:00:00.302291       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-189125-m04\" does not exist"
	I0818 19:00:00.340342       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-189125-m04" podCIDRs=["10.244.3.0/24"]
	I0818 19:00:00.340413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:00.340450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:00.617248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:00.991453       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:01.377709       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-189125-m04"
	I0818 19:00:01.485256       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:03.700873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:03.728731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:04.827124       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:05.076796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:10.560341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:21.897937       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-189125-m04"
	I0818 19:00:21.898570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:21.918149       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:23.721783       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:00:31.086068       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:01:16.408611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-189125-m04"
	I0818 19:01:16.409701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	I0818 19:01:16.439469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	I0818 19:01:16.507571       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.846361ms"
	I0818 19:01:16.507745       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="76.486µs"
	I0818 19:01:18.818484       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	I0818 19:01:21.592183       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	
	
	==> kube-proxy [d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:56:03.608539       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:56:03.625403       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.49"]
	E0818 18:56:03.625483       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:56:03.667004       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:56:03.667048       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:56:03.667128       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:56:03.669742       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:56:03.670274       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:56:03.670298       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:56:03.672205       1 config.go:197] "Starting service config controller"
	I0818 18:56:03.672291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:56:03.672527       1 config.go:326] "Starting node config controller"
	I0818 18:56:03.672553       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:56:03.673050       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:56:03.673190       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:56:03.773288       1 shared_informer.go:320] Caches are synced for node config
	I0818 18:56:03.773384       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:56:03.776665       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058] <==
	W0818 18:55:55.891235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 18:55:55.891329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:55:55.970802       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 18:55:55.970854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 18:55:55.975170       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 18:55:55.975215       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0818 18:55:58.645332       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 18:58:54.897380       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-22f8v\": pod kube-proxy-22f8v is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-22f8v" node="ha-189125-m03"
	E0818 18:58:54.897530       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 446b7123-e92b-4ce3-b3a4-d096e00ea7e9(kube-system/kube-proxy-22f8v) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-22f8v"
	E0818 18:58:54.897583       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-22f8v\": pod kube-proxy-22f8v is already assigned to node \"ha-189125-m03\"" pod="kube-system/kube-proxy-22f8v"
	I0818 18:58:54.897633       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-22f8v" node="ha-189125-m03"
	E0818 18:58:54.898809       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24xql\": pod kindnet-24xql is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-24xql" node="ha-189125-m03"
	E0818 18:58:54.898876       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba1034b3-04c9-4c64-8fde-7b45ea42f21c(kube-system/kindnet-24xql) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24xql"
	E0818 18:58:54.898900       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24xql\": pod kindnet-24xql is already assigned to node \"ha-189125-m03\"" pod="kube-system/kindnet-24xql"
	I0818 18:58:54.898918       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24xql" node="ha-189125-m03"
	E0818 18:59:23.602753       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8bwfj\": pod busybox-7dff88458-8bwfj is already assigned to node \"ha-189125-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8bwfj" node="ha-189125-m02"
	E0818 18:59:23.602879       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8bwfj\": pod busybox-7dff88458-8bwfj is already assigned to node \"ha-189125-m02\"" pod="default/busybox-7dff88458-8bwfj"
	E0818 18:59:23.652419       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fvdcn\": pod busybox-7dff88458-fvdcn is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fvdcn" node="ha-189125-m03"
	E0818 18:59:23.652848       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 19fc5686-7021-4b6f-a097-71f7b6d6a76e(default/busybox-7dff88458-fvdcn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fvdcn"
	E0818 18:59:23.652953       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fvdcn\": pod busybox-7dff88458-fvdcn is already assigned to node \"ha-189125-m03\"" pod="default/busybox-7dff88458-fvdcn"
	I0818 18:59:23.653004       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fvdcn" node="ha-189125-m03"
	E0818 18:59:23.653552       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kxdwj\": pod busybox-7dff88458-kxdwj is already assigned to node \"ha-189125\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kxdwj" node="ha-189125"
	E0818 18:59:23.655579       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e2ebdc21-75ca-43ac-86f2-7c492eefe97d(default/busybox-7dff88458-kxdwj) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-kxdwj"
	E0818 18:59:23.655718       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kxdwj\": pod busybox-7dff88458-kxdwj is already assigned to node \"ha-189125\"" pod="default/busybox-7dff88458-kxdwj"
	I0818 18:59:23.655773       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-kxdwj" node="ha-189125"
	
	
	==> kubelet <==
	Aug 18 19:02:47 ha-189125 kubelet[1332]: E0818 19:02:47.629733    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007767629013053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:57 ha-189125 kubelet[1332]: E0818 19:02:57.530806    1332 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:02:57 ha-189125 kubelet[1332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:02:57 ha-189125 kubelet[1332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:02:57 ha-189125 kubelet[1332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:02:57 ha-189125 kubelet[1332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:02:57 ha-189125 kubelet[1332]: E0818 19:02:57.632173    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007777631413993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:02:57 ha-189125 kubelet[1332]: E0818 19:02:57.632257    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007777631413993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:07 ha-189125 kubelet[1332]: E0818 19:03:07.635123    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007787634459889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:07 ha-189125 kubelet[1332]: E0818 19:03:07.635153    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007787634459889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:17 ha-189125 kubelet[1332]: E0818 19:03:17.636960    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007797636617537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:17 ha-189125 kubelet[1332]: E0818 19:03:17.637002    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007797636617537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:27 ha-189125 kubelet[1332]: E0818 19:03:27.640175    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007807639610828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:27 ha-189125 kubelet[1332]: E0818 19:03:27.640227    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007807639610828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:37 ha-189125 kubelet[1332]: E0818 19:03:37.642520    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007817642229575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:37 ha-189125 kubelet[1332]: E0818 19:03:37.642570    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007817642229575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:47 ha-189125 kubelet[1332]: E0818 19:03:47.644882    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007827644325118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:47 ha-189125 kubelet[1332]: E0818 19:03:47.645637    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007827644325118,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:57 ha-189125 kubelet[1332]: E0818 19:03:57.531225    1332 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:03:57 ha-189125 kubelet[1332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:03:57 ha-189125 kubelet[1332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:03:57 ha-189125 kubelet[1332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:03:57 ha-189125 kubelet[1332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:03:57 ha-189125 kubelet[1332]: E0818 19:03:57.647883    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007837647564443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:03:57 ha-189125 kubelet[1332]: E0818 19:03:57.647914    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724007837647564443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-189125 -n ha-189125
helpers_test.go:261: (dbg) Run:  kubectl --context ha-189125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (55.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (403.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-189125 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-189125 -v=7 --alsologtostderr
E0818 19:04:26.646231   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:04:54.348065   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-189125 -v=7 --alsologtostderr: exit status 82 (2m1.943386273s)

                                                
                                                
-- stdout --
	* Stopping node "ha-189125-m04"  ...
	* Stopping node "ha-189125-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:04:03.592806   31468 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:04:03.593050   31468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:04:03.593058   31468 out.go:358] Setting ErrFile to fd 2...
	I0818 19:04:03.593061   31468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:04:03.593265   31468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:04:03.593466   31468 out.go:352] Setting JSON to false
	I0818 19:04:03.593547   31468 mustload.go:65] Loading cluster: ha-189125
	I0818 19:04:03.593925   31468 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:04:03.594016   31468 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 19:04:03.594186   31468 mustload.go:65] Loading cluster: ha-189125
	I0818 19:04:03.594315   31468 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:04:03.594348   31468 stop.go:39] StopHost: ha-189125-m04
	I0818 19:04:03.594731   31468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:03.594766   31468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:03.609152   31468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38197
	I0818 19:04:03.609622   31468 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:03.610234   31468 main.go:141] libmachine: Using API Version  1
	I0818 19:04:03.610256   31468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:03.610607   31468 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:03.613262   31468 out.go:177] * Stopping node "ha-189125-m04"  ...
	I0818 19:04:03.614696   31468 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 19:04:03.614731   31468 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:04:03.614954   31468 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 19:04:03.614987   31468 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:04:03.617650   31468 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:04:03.618085   31468 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:59:46 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:04:03.618129   31468 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:04:03.618253   31468 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:04:03.618428   31468 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:04:03.618631   31468 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:04:03.618815   31468 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:04:03.706823   31468 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0818 19:04:03.760588   31468 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0818 19:04:03.814780   31468 main.go:141] libmachine: Stopping "ha-189125-m04"...
	I0818 19:04:03.814820   31468 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:04:03.816400   31468 main.go:141] libmachine: (ha-189125-m04) Calling .Stop
	I0818 19:04:03.820279   31468 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 0/120
	I0818 19:04:05.054029   31468 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:04:05.055701   31468 main.go:141] libmachine: Machine "ha-189125-m04" was stopped.
	I0818 19:04:05.055716   31468 stop.go:75] duration metric: took 1.44102398s to stop
	I0818 19:04:05.055745   31468 stop.go:39] StopHost: ha-189125-m03
	I0818 19:04:05.056018   31468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:04:05.056051   31468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:04:05.070505   31468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38015
	I0818 19:04:05.070908   31468 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:04:05.071469   31468 main.go:141] libmachine: Using API Version  1
	I0818 19:04:05.071494   31468 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:04:05.071815   31468 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:04:05.074042   31468 out.go:177] * Stopping node "ha-189125-m03"  ...
	I0818 19:04:05.075439   31468 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 19:04:05.075462   31468 main.go:141] libmachine: (ha-189125-m03) Calling .DriverName
	I0818 19:04:05.075673   31468 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 19:04:05.075691   31468 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHHostname
	I0818 19:04:05.078638   31468 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:04:05.079076   31468 main.go:141] libmachine: (ha-189125-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:db:3a", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:58:21 +0000 UTC Type:0 Mac:52:54:00:df:db:3a Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-189125-m03 Clientid:01:52:54:00:df:db:3a}
	I0818 19:04:05.079098   31468 main.go:141] libmachine: (ha-189125-m03) DBG | domain ha-189125-m03 has defined IP address 192.168.39.170 and MAC address 52:54:00:df:db:3a in network mk-ha-189125
	I0818 19:04:05.079331   31468 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHPort
	I0818 19:04:05.079508   31468 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHKeyPath
	I0818 19:04:05.079635   31468 main.go:141] libmachine: (ha-189125-m03) Calling .GetSSHUsername
	I0818 19:04:05.079819   31468 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m03/id_rsa Username:docker}
	I0818 19:04:05.181276   31468 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0818 19:04:05.239069   31468 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0818 19:04:05.294924   31468 main.go:141] libmachine: Stopping "ha-189125-m03"...
	I0818 19:04:05.294955   31468 main.go:141] libmachine: (ha-189125-m03) Calling .GetState
	I0818 19:04:05.296474   31468 main.go:141] libmachine: (ha-189125-m03) Calling .Stop
	I0818 19:04:05.299880   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 0/120
	I0818 19:04:06.302064   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 1/120
	I0818 19:04:07.303470   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 2/120
	I0818 19:04:08.304819   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 3/120
	I0818 19:04:09.306204   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 4/120
	I0818 19:04:10.308341   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 5/120
	I0818 19:04:11.310117   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 6/120
	I0818 19:04:12.311440   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 7/120
	I0818 19:04:13.312891   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 8/120
	I0818 19:04:14.314297   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 9/120
	I0818 19:04:15.316013   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 10/120
	I0818 19:04:16.317801   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 11/120
	I0818 19:04:17.318990   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 12/120
	I0818 19:04:18.320502   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 13/120
	I0818 19:04:19.321696   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 14/120
	I0818 19:04:20.323705   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 15/120
	I0818 19:04:21.324990   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 16/120
	I0818 19:04:22.326265   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 17/120
	I0818 19:04:23.327423   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 18/120
	I0818 19:04:24.328867   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 19/120
	I0818 19:04:25.330482   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 20/120
	I0818 19:04:26.331694   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 21/120
	I0818 19:04:27.333859   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 22/120
	I0818 19:04:28.335155   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 23/120
	I0818 19:04:29.336758   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 24/120
	I0818 19:04:30.338468   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 25/120
	I0818 19:04:31.339838   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 26/120
	I0818 19:04:32.341295   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 27/120
	I0818 19:04:33.342940   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 28/120
	I0818 19:04:34.344788   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 29/120
	I0818 19:04:35.347240   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 30/120
	I0818 19:04:36.348982   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 31/120
	I0818 19:04:37.350486   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 32/120
	I0818 19:04:38.351934   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 33/120
	I0818 19:04:39.353784   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 34/120
	I0818 19:04:40.355820   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 35/120
	I0818 19:04:41.357868   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 36/120
	I0818 19:04:42.359347   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 37/120
	I0818 19:04:43.360904   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 38/120
	I0818 19:04:44.362512   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 39/120
	I0818 19:04:45.364395   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 40/120
	I0818 19:04:46.365778   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 41/120
	I0818 19:04:47.367127   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 42/120
	I0818 19:04:48.368487   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 43/120
	I0818 19:04:49.369695   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 44/120
	I0818 19:04:50.371540   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 45/120
	I0818 19:04:51.372983   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 46/120
	I0818 19:04:52.374291   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 47/120
	I0818 19:04:53.375713   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 48/120
	I0818 19:04:54.377086   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 49/120
	I0818 19:04:55.378915   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 50/120
	I0818 19:04:56.380361   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 51/120
	I0818 19:04:57.381981   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 52/120
	I0818 19:04:58.383434   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 53/120
	I0818 19:04:59.384755   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 54/120
	I0818 19:05:00.386520   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 55/120
	I0818 19:05:01.387992   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 56/120
	I0818 19:05:02.389385   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 57/120
	I0818 19:05:03.390861   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 58/120
	I0818 19:05:04.392468   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 59/120
	I0818 19:05:05.394240   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 60/120
	I0818 19:05:06.395730   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 61/120
	I0818 19:05:07.398021   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 62/120
	I0818 19:05:08.399612   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 63/120
	I0818 19:05:09.401917   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 64/120
	I0818 19:05:10.403665   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 65/120
	I0818 19:05:11.405263   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 66/120
	I0818 19:05:12.406553   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 67/120
	I0818 19:05:13.407776   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 68/120
	I0818 19:05:14.409811   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 69/120
	I0818 19:05:15.411452   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 70/120
	I0818 19:05:16.412735   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 71/120
	I0818 19:05:17.414061   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 72/120
	I0818 19:05:18.415551   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 73/120
	I0818 19:05:19.416856   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 74/120
	I0818 19:05:20.418962   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 75/120
	I0818 19:05:21.420346   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 76/120
	I0818 19:05:22.421787   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 77/120
	I0818 19:05:23.423157   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 78/120
	I0818 19:05:24.424515   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 79/120
	I0818 19:05:25.426379   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 80/120
	I0818 19:05:26.427853   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 81/120
	I0818 19:05:27.429178   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 82/120
	I0818 19:05:28.430777   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 83/120
	I0818 19:05:29.431999   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 84/120
	I0818 19:05:30.433722   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 85/120
	I0818 19:05:31.434841   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 86/120
	I0818 19:05:32.436134   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 87/120
	I0818 19:05:33.437473   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 88/120
	I0818 19:05:34.438826   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 89/120
	I0818 19:05:35.441087   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 90/120
	I0818 19:05:36.442402   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 91/120
	I0818 19:05:37.443910   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 92/120
	I0818 19:05:38.445097   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 93/120
	I0818 19:05:39.446895   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 94/120
	I0818 19:05:40.448471   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 95/120
	I0818 19:05:41.449714   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 96/120
	I0818 19:05:42.451221   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 97/120
	I0818 19:05:43.452618   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 98/120
	I0818 19:05:44.454198   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 99/120
	I0818 19:05:45.455410   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 100/120
	I0818 19:05:46.456854   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 101/120
	I0818 19:05:47.458608   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 102/120
	I0818 19:05:48.459958   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 103/120
	I0818 19:05:49.461322   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 104/120
	I0818 19:05:50.463545   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 105/120
	I0818 19:05:51.464870   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 106/120
	I0818 19:05:52.466438   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 107/120
	I0818 19:05:53.467912   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 108/120
	I0818 19:05:54.469223   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 109/120
	I0818 19:05:55.470760   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 110/120
	I0818 19:05:56.472673   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 111/120
	I0818 19:05:57.474077   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 112/120
	I0818 19:05:58.475565   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 113/120
	I0818 19:05:59.476898   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 114/120
	I0818 19:06:00.478590   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 115/120
	I0818 19:06:01.479870   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 116/120
	I0818 19:06:02.481191   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 117/120
	I0818 19:06:03.482426   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 118/120
	I0818 19:06:04.483897   31468 main.go:141] libmachine: (ha-189125-m03) Waiting for machine to stop 119/120
	I0818 19:06:05.484589   31468 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0818 19:06:05.484663   31468 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0818 19:06:05.486885   31468 out.go:201] 
	W0818 19:06:05.488547   31468 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0818 19:06:05.488569   31468 out.go:270] * 
	* 
	W0818 19:06:05.490800   31468 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 19:06:05.492588   31468 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-189125 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-189125 --wait=true -v=7 --alsologtostderr
E0818 19:06:44.018987   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:08:07.084242   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:09:26.646827   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-189125 --wait=true -v=7 --alsologtostderr: (4m38.727844898s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-189125
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-189125 -n ha-189125
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-189125 logs -n 25: (2.005644473s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m02:/home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m02 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04:/home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m04 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp testdata/cp-test.txt                                                | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125:/home/docker/cp-test_ha-189125-m04_ha-189125.txt                       |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125 sudo cat                                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125.txt                                 |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m02:/home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m02 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03:/home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m03 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-189125 node stop m02 -v=7                                                     | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-189125 node start m02 -v=7                                                    | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-189125 -v=7                                                           | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-189125 -v=7                                                                | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-189125 --wait=true -v=7                                                    | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:06 UTC | 18 Aug 24 19:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-189125                                                                | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:10 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 19:06:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 19:06:05.538758   31924 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:06:05.538887   31924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:06:05.538898   31924 out.go:358] Setting ErrFile to fd 2...
	I0818 19:06:05.538904   31924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:06:05.539085   31924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:06:05.539715   31924 out.go:352] Setting JSON to false
	I0818 19:06:05.540725   31924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2910,"bootTime":1724005056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:06:05.540788   31924 start.go:139] virtualization: kvm guest
	I0818 19:06:05.543300   31924 out.go:177] * [ha-189125] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:06:05.545058   31924 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:06:05.545099   31924 notify.go:220] Checking for updates...
	I0818 19:06:05.548344   31924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:06:05.550157   31924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:06:05.551646   31924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:06:05.552939   31924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:06:05.554202   31924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:06:05.555908   31924 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:06:05.556012   31924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:06:05.556464   31924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:06:05.556507   31924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:06:05.571659   31924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0818 19:06:05.572189   31924 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:06:05.572752   31924 main.go:141] libmachine: Using API Version  1
	I0818 19:06:05.572772   31924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:06:05.573101   31924 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:06:05.573269   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:06:05.609144   31924 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 19:06:05.610456   31924 start.go:297] selected driver: kvm2
	I0818 19:06:05.610477   31924 start.go:901] validating driver "kvm2" against &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:06:05.610616   31924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:06:05.610938   31924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:06:05.611029   31924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:06:05.626188   31924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:06:05.626867   31924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:06:05.626936   31924 cni.go:84] Creating CNI manager for ""
	I0818 19:06:05.626945   31924 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 19:06:05.626998   31924 start.go:340] cluster config:
	{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:06:05.627145   31924 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:06:05.629749   31924 out.go:177] * Starting "ha-189125" primary control-plane node in "ha-189125" cluster
	I0818 19:06:05.631060   31924 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:06:05.631112   31924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 19:06:05.631137   31924 cache.go:56] Caching tarball of preloaded images
	I0818 19:06:05.631235   31924 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:06:05.631250   31924 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 19:06:05.631437   31924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 19:06:05.631646   31924 start.go:360] acquireMachinesLock for ha-189125: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:06:05.631689   31924 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "ha-189125"
	I0818 19:06:05.631703   31924 start.go:96] Skipping create...Using existing machine configuration
	I0818 19:06:05.631713   31924 fix.go:54] fixHost starting: 
	I0818 19:06:05.631994   31924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:06:05.632024   31924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:06:05.646579   31924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46445
	I0818 19:06:05.647087   31924 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:06:05.647625   31924 main.go:141] libmachine: Using API Version  1
	I0818 19:06:05.647652   31924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:06:05.647950   31924 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:06:05.648157   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:06:05.648335   31924 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:06:05.649969   31924 fix.go:112] recreateIfNeeded on ha-189125: state=Running err=<nil>
	W0818 19:06:05.649988   31924 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 19:06:05.652796   31924 out.go:177] * Updating the running kvm2 "ha-189125" VM ...
	I0818 19:06:05.654252   31924 machine.go:93] provisionDockerMachine start ...
	I0818 19:06:05.654281   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:06:05.654530   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:05.657169   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.657658   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:05.657685   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.657866   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:05.658045   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.658252   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.658388   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:05.658533   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:06:05.658756   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:06:05.658769   31924 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 19:06:05.764746   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125
	
	I0818 19:06:05.764771   31924 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 19:06:05.765026   31924 buildroot.go:166] provisioning hostname "ha-189125"
	I0818 19:06:05.765049   31924 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 19:06:05.765234   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:05.767589   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.767930   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:05.767963   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.768131   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:05.768305   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.768468   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.768608   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:05.768800   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:06:05.768968   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:06:05.768980   31924 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-189125 && echo "ha-189125" | sudo tee /etc/hostname
	I0818 19:06:05.892450   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125
	
	I0818 19:06:05.892479   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:05.895089   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.895515   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:05.895552   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.895749   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:05.895944   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.896112   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.896267   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:05.896478   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:06:05.896687   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:06:05.896712   31924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-189125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-189125/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-189125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:06:06.012988   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:06:06.013020   31924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 19:06:06.013050   31924 buildroot.go:174] setting up certificates
	I0818 19:06:06.013064   31924 provision.go:84] configureAuth start
	I0818 19:06:06.013073   31924 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 19:06:06.013376   31924 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:06:06.015862   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.016212   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:06.016239   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.016518   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:06.018792   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.019187   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:06.019209   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.019367   31924 provision.go:143] copyHostCerts
	I0818 19:06:06.019420   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:06:06.019468   31924 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 19:06:06.019484   31924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:06:06.019550   31924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 19:06:06.019619   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:06:06.019637   31924 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 19:06:06.019643   31924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:06:06.019666   31924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 19:06:06.019705   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:06:06.019721   31924 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 19:06:06.019727   31924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:06:06.019753   31924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 19:06:06.019795   31924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.ha-189125 san=[127.0.0.1 192.168.39.49 ha-189125 localhost minikube]
	I0818 19:06:06.169846   31924 provision.go:177] copyRemoteCerts
	I0818 19:06:06.169898   31924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:06:06.169920   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:06.172607   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.172994   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:06.173021   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.173168   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:06.173367   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:06.173535   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:06.173677   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:06:06.255494   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 19:06:06.255589   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0818 19:06:06.285898   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 19:06:06.285983   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 19:06:06.315455   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 19:06:06.315537   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:06:06.344895   31924 provision.go:87] duration metric: took 331.817623ms to configureAuth
	I0818 19:06:06.344925   31924 buildroot.go:189] setting minikube options for container-runtime
	I0818 19:06:06.345149   31924 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:06:06.345233   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:06.348058   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.348468   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:06.348499   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.348711   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:06.348917   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:06.349070   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:06.349322   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:06.349540   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:06:06.349706   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:06:06.349723   31924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 19:07:37.264588   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 19:07:37.264616   31924 machine.go:96] duration metric: took 1m31.610346753s to provisionDockerMachine
	I0818 19:07:37.264628   31924 start.go:293] postStartSetup for "ha-189125" (driver="kvm2")
	I0818 19:07:37.264639   31924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:07:37.264653   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.264954   31924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:07:37.264975   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.268186   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.268633   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.268651   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.268804   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.268979   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.269197   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.269352   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:07:37.351852   31924 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:07:37.355885   31924 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 19:07:37.355918   31924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 19:07:37.355987   31924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 19:07:37.356073   31924 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 19:07:37.356097   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 19:07:37.356227   31924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:07:37.366106   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:07:37.389557   31924 start.go:296] duration metric: took 124.916601ms for postStartSetup
	I0818 19:07:37.389602   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.389929   31924 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 19:07:37.389951   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.392622   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.392982   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.393009   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.393167   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.393351   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.393524   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.393655   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	W0818 19:07:37.478147   31924 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0818 19:07:37.478173   31924 fix.go:56] duration metric: took 1m31.84646119s for fixHost
	I0818 19:07:37.478202   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.480550   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.480917   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.480947   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.481081   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.481302   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.481425   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.481572   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.481736   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:07:37.481941   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:07:37.481953   31924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 19:07:37.580007   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008057.547613306
	
	I0818 19:07:37.580031   31924 fix.go:216] guest clock: 1724008057.547613306
	I0818 19:07:37.580040   31924 fix.go:229] Guest: 2024-08-18 19:07:37.547613306 +0000 UTC Remote: 2024-08-18 19:07:37.478186899 +0000 UTC m=+91.975933127 (delta=69.426407ms)
	I0818 19:07:37.580091   31924 fix.go:200] guest clock delta is within tolerance: 69.426407ms
	I0818 19:07:37.580101   31924 start.go:83] releasing machines lock for "ha-189125", held for 1m31.948402543s
	I0818 19:07:37.580140   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.580359   31924 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:07:37.582711   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.583012   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.583037   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.583148   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.583608   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.583763   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.583872   31924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:07:37.583920   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.583948   31924 ssh_runner.go:195] Run: cat /version.json
	I0818 19:07:37.583970   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.586059   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.586340   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.586374   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.586395   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.586487   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.586650   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.586710   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.586734   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.586784   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.586908   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.586940   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:07:37.587064   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.587190   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.587323   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:07:37.660859   31924 ssh_runner.go:195] Run: systemctl --version
	I0818 19:07:37.689591   31924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 19:07:37.849280   31924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 19:07:37.858858   31924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 19:07:37.858931   31924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:07:37.868534   31924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0818 19:07:37.868558   31924 start.go:495] detecting cgroup driver to use...
	I0818 19:07:37.868626   31924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 19:07:37.884705   31924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 19:07:37.898789   31924 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:07:37.898832   31924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:07:37.911736   31924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:07:37.925097   31924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:07:38.071610   31924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:07:38.215462   31924 docker.go:233] disabling docker service ...
	I0818 19:07:38.215526   31924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:07:38.233043   31924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:07:38.246968   31924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 19:07:38.390025   31924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 19:07:38.535880   31924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:07:38.550973   31924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:07:38.569703   31924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 19:07:38.569756   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.581001   31924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 19:07:38.581055   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.591705   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.602543   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.612836   31924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:07:38.623510   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.634002   31924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.645689   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.656232   31924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:07:38.665965   31924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:07:38.675565   31924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:07:38.822305   31924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 19:07:39.250990   31924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 19:07:39.251050   31924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 19:07:39.256028   31924 start.go:563] Will wait 60s for crictl version
	I0818 19:07:39.256087   31924 ssh_runner.go:195] Run: which crictl
	I0818 19:07:39.260146   31924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:07:39.297461   31924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 19:07:39.297549   31924 ssh_runner.go:195] Run: crio --version
	I0818 19:07:39.326109   31924 ssh_runner.go:195] Run: crio --version
	I0818 19:07:39.357322   31924 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 19:07:39.358898   31924 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:07:39.361418   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:39.361803   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:39.361828   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:39.362028   31924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 19:07:39.366700   31924 kubeadm.go:883] updating cluster {Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:07:39.366833   31924 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:07:39.366887   31924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:07:39.419370   31924 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:07:39.419413   31924 crio.go:433] Images already preloaded, skipping extraction
	I0818 19:07:39.419476   31924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:07:39.452680   31924 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:07:39.452699   31924 cache_images.go:84] Images are preloaded, skipping loading
	I0818 19:07:39.452707   31924 kubeadm.go:934] updating node { 192.168.39.49 8443 v1.31.0 crio true true} ...
	I0818 19:07:39.452822   31924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-189125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 19:07:39.452908   31924 ssh_runner.go:195] Run: crio config
	I0818 19:07:39.499803   31924 cni.go:84] Creating CNI manager for ""
	I0818 19:07:39.499824   31924 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 19:07:39.499836   31924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:07:39.499869   31924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.49 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-189125 NodeName:ha-189125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 19:07:39.500009   31924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-189125"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 19:07:39.500034   31924 kube-vip.go:115] generating kube-vip config ...
	I0818 19:07:39.500081   31924 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 19:07:39.511639   31924 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 19:07:39.511782   31924 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0818 19:07:39.511847   31924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 19:07:39.521055   31924 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:07:39.521130   31924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 19:07:39.530402   31924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0818 19:07:39.546888   31924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:07:39.562803   31924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0818 19:07:39.578840   31924 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0818 19:07:39.596494   31924 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0818 19:07:39.600466   31924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:07:39.739066   31924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:07:39.753759   31924 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125 for IP: 192.168.39.49
	I0818 19:07:39.753780   31924 certs.go:194] generating shared ca certs ...
	I0818 19:07:39.753794   31924 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:07:39.753924   31924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 19:07:39.753960   31924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 19:07:39.753971   31924 certs.go:256] generating profile certs ...
	I0818 19:07:39.754042   31924 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key
	I0818 19:07:39.754066   31924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ef4a23ea
	I0818 19:07:39.754092   31924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ef4a23ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49 192.168.39.147 192.168.39.170 192.168.39.254]
	I0818 19:07:39.933649   31924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ef4a23ea ...
	I0818 19:07:39.933679   31924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ef4a23ea: {Name:mkdc56597df2587c95958d3a0975f94a91bdd52d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:07:39.933872   31924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ef4a23ea ...
	I0818 19:07:39.933889   31924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ef4a23ea: {Name:mkbc917195b7b61cd9ba2cfbe30abf338bd83958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:07:39.933991   31924 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ef4a23ea -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt
	I0818 19:07:39.934195   31924 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ef4a23ea -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key
	I0818 19:07:39.934369   31924 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key
	I0818 19:07:39.934542   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 19:07:39.934608   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 19:07:39.934630   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 19:07:39.934651   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 19:07:39.934672   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 19:07:39.934691   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 19:07:39.934755   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 19:07:39.934775   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 19:07:39.934857   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 19:07:39.934907   31924 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 19:07:39.934921   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:07:39.934967   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:07:39.935020   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:07:39.935052   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 19:07:39.935114   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:07:39.935157   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:07:39.935181   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 19:07:39.935199   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 19:07:39.935843   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:07:39.960784   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:07:39.984252   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:07:40.035029   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 19:07:40.113353   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 19:07:40.150748   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 19:07:40.197845   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:07:40.223656   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 19:07:40.255793   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 19:07:40.292028   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 19:07:40.330529   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 19:07:40.364569   31924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:07:40.392780   31924 ssh_runner.go:195] Run: openssl version
	I0818 19:07:40.399027   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:07:40.417718   31924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:07:40.422630   31924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:07:40.422669   31924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:07:40.428925   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 19:07:40.437895   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 19:07:40.448159   31924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 19:07:40.452514   31924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:07:40.452555   31924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 19:07:40.457996   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 19:07:40.466836   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 19:07:40.477133   31924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 19:07:40.481994   31924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:07:40.482035   31924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 19:07:40.487558   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 19:07:40.496568   31924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:07:40.501015   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 19:07:40.506578   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 19:07:40.512267   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 19:07:40.517630   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 19:07:40.523626   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 19:07:40.528832   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 19:07:40.534232   31924 kubeadm.go:392] StartCluster: {Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:07:40.534391   31924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 19:07:40.534450   31924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:07:40.570633   31924 cri.go:89] found id: "59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70"
	I0818 19:07:40.570652   31924 cri.go:89] found id: "fb12fc4ec25af08df46345660895d418672cbe1d7db1e593dcd597d263ca8c49"
	I0818 19:07:40.570656   31924 cri.go:89] found id: "7544f3da7d94ec0d1b3fea670c2fdf21384f845a3aba7f87c8df32f49453d08c"
	I0818 19:07:40.570659   31924 cri.go:89] found id: "4b5799e37ce1ed7e355adf8ea7403c2a7f3d4f154a5276d6c9b71220c63e2e61"
	I0818 19:07:40.570662   31924 cri.go:89] found id: "fc3542516c1910d46d9dae2b65572cb275ab4eb3f0640acf0110d44193161c4f"
	I0818 19:07:40.570665   31924 cri.go:89] found id: "181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2"
	I0818 19:07:40.570667   31924 cri.go:89] found id: "f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107"
	I0818 19:07:40.570669   31924 cri.go:89] found id: "197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6"
	I0818 19:07:40.570672   31924 cri.go:89] found id: "d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d"
	I0818 19:07:40.570677   31924 cri.go:89] found id: "f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9"
	I0818 19:07:40.570679   31924 cri.go:89] found id: "79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125"
	I0818 19:07:40.570682   31924 cri.go:89] found id: "972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0"
	I0818 19:07:40.570685   31924 cri.go:89] found id: "8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058"
	I0818 19:07:40.570688   31924 cri.go:89] found id: "2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036"
	I0818 19:07:40.570692   31924 cri.go:89] found id: ""
	I0818 19:07:40.570730   31924 ssh_runner.go:195] Run: sudo runc list -f json
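Note on the certificate checks in the log above: before StartCluster, minikube verifies each control-plane certificate with "openssl x509 -noout -in <cert> -checkend 86400", i.e. the certificate must remain valid for at least another 24 hours. A minimal Go sketch of the equivalent in-process check follows; the certificate path is one of the targets copied above and, like the 24-hour threshold, is used here only for illustration, not taken from this test run.

// Sketch of the 24-hour expiry check the log performs with
// "openssl x509 -checkend 86400", done directly with crypto/x509.
// The certificate path below is illustrative.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "read cert:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, "parse cert:", err)
		os.Exit(1)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h, expires:", cert.NotAfter)
}

The actual run shells these checks out to openssl over SSH (the ssh_runner lines above); parsing the certificate in-process is simply the same validity test expressed without the external command.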
	
	
	==> CRI-O <==
	Aug 18 19:10:44 ha-189125 crio[3711]: time="2024-08-18 19:10:44.940440888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008244940415295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae79e7e2-0aa4-468a-a3ad-571f25c79577 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:10:44 ha-189125 crio[3711]: time="2024-08-18 19:10:44.940886945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80ed29de-204a-4a93-995a-7a966aa49c3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:44 ha-189125 crio[3711]: time="2024-08-18 19:10:44.940946553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80ed29de-204a-4a93-995a-7a966aa49c3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:44 ha-189125 crio[3711]: time="2024-08-18 19:10:44.941421442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355ce87e30385ee6ebe57147018b9dcef3cebb9c14cf6115f6c958193bb4673e,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724008151505840224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724008112502594529,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724008105503005864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d921b23237765708d696d48a2b78d70e5573a279dd5aa8c1c21b3e87f144be,PodSandboxId:c624d083cfe8b1918684e13f427d19717512f77a8b8dd1cbc946119d91dcc4ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724008098865959919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc77b3d0c32bba19cf0f4552096d3a4bd50cca218c4ad3c468ff779fcbdaea05,PodSandboxId:2e17b9ce366791ccbe7be90fac891b3ee72587a6258545530b732532c2cb3a60,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724008079526716639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd66e844c8c1cf0bca8571443427a34e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320,PodSandboxId:bc82656318d897233bf510f47fbafdcf69a77c16ce67a451201b1a6f5f105c89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724008065531808934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d6e5e1d924faa0cbaf2a98331cba44192754b1cda5c19934fea840d5b640d326,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724008065666525967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c,PodSandboxId:bc6acbfa3e23e5510924996ef360ef90c6fe6ebfd49e61957a1c21779571feee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724008065502418956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0201fb0f
3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3,PodSandboxId:a20694505a6e78eb5c262d833a9d46d3cda4ac689f3f9adde8785e843e4e5df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724008065385558143,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6eabe6cb2b27456043492a7a17f7a
a98a7ace76e092cb01972aebd6beca960f,PodSandboxId:c99f76b154ff2d0efeb49b2d69bc06c09e5780eec60473697885123776031967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008065434817846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724008065276873163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724008065267031643,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3,PodSandboxId:631f0bdb59f802be42fbd3ac58ebcb78e5061a0cabcf4e75d0f1b0107762443d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724008065217000978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70,PodSandboxId:ac3b59788a4b90c3842cd67f36786f6348d794e4843874b697034f2559e98b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008060250386766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724007567164702589,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379297354682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379300776150,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724007367338619690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724007363376537147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724007351593272153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724007351506952426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80ed29de-204a-4a93-995a-7a966aa49c3a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:44 ha-189125 crio[3711]: time="2024-08-18 19:10:44.996973642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f664b063-94aa-425e-9181-d2336f54583f name=/runtime.v1.RuntimeService/Version
	Aug 18 19:10:44 ha-189125 crio[3711]: time="2024-08-18 19:10:44.997134595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f664b063-94aa-425e-9181-d2336f54583f name=/runtime.v1.RuntimeService/Version
	Aug 18 19:10:44 ha-189125 crio[3711]: time="2024-08-18 19:10:44.998601125Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56f8238f-c11d-4304-895d-a32062196d7f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:10:44 ha-189125 crio[3711]: time="2024-08-18 19:10:44.999319056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008244999287681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56f8238f-c11d-4304-895d-a32062196d7f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.000534838Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b686cb9-93a0-4c09-b569-de15037a3896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.000610065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b686cb9-93a0-4c09-b569-de15037a3896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.001218207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355ce87e30385ee6ebe57147018b9dcef3cebb9c14cf6115f6c958193bb4673e,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724008151505840224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724008112502594529,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724008105503005864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d921b23237765708d696d48a2b78d70e5573a279dd5aa8c1c21b3e87f144be,PodSandboxId:c624d083cfe8b1918684e13f427d19717512f77a8b8dd1cbc946119d91dcc4ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724008098865959919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc77b3d0c32bba19cf0f4552096d3a4bd50cca218c4ad3c468ff779fcbdaea05,PodSandboxId:2e17b9ce366791ccbe7be90fac891b3ee72587a6258545530b732532c2cb3a60,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724008079526716639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd66e844c8c1cf0bca8571443427a34e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320,PodSandboxId:bc82656318d897233bf510f47fbafdcf69a77c16ce67a451201b1a6f5f105c89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724008065531808934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d6e5e1d924faa0cbaf2a98331cba44192754b1cda5c19934fea840d5b640d326,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724008065666525967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c,PodSandboxId:bc6acbfa3e23e5510924996ef360ef90c6fe6ebfd49e61957a1c21779571feee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724008065502418956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0201fb0f
3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3,PodSandboxId:a20694505a6e78eb5c262d833a9d46d3cda4ac689f3f9adde8785e843e4e5df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724008065385558143,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6eabe6cb2b27456043492a7a17f7a
a98a7ace76e092cb01972aebd6beca960f,PodSandboxId:c99f76b154ff2d0efeb49b2d69bc06c09e5780eec60473697885123776031967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008065434817846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724008065276873163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724008065267031643,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3,PodSandboxId:631f0bdb59f802be42fbd3ac58ebcb78e5061a0cabcf4e75d0f1b0107762443d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724008065217000978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70,PodSandboxId:ac3b59788a4b90c3842cd67f36786f6348d794e4843874b697034f2559e98b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008060250386766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724007567164702589,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379297354682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379300776150,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724007367338619690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724007363376537147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724007351593272153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724007351506952426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b686cb9-93a0-4c09-b569-de15037a3896 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.060927401Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9d14b78-a7af-448a-b5da-45afb60cbaca name=/runtime.v1.RuntimeService/Version
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.061020987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9d14b78-a7af-448a-b5da-45afb60cbaca name=/runtime.v1.RuntimeService/Version
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.062552988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7dfda094-72b1-4aa7-b0fe-2d63b636155d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.063548303Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008245063520737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7dfda094-72b1-4aa7-b0fe-2d63b636155d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.064480599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fee94df4-2cea-4a9b-8506-ea5652f9dd32 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.064553646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fee94df4-2cea-4a9b-8506-ea5652f9dd32 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.065033368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355ce87e30385ee6ebe57147018b9dcef3cebb9c14cf6115f6c958193bb4673e,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724008151505840224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724008112502594529,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724008105503005864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d921b23237765708d696d48a2b78d70e5573a279dd5aa8c1c21b3e87f144be,PodSandboxId:c624d083cfe8b1918684e13f427d19717512f77a8b8dd1cbc946119d91dcc4ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724008098865959919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc77b3d0c32bba19cf0f4552096d3a4bd50cca218c4ad3c468ff779fcbdaea05,PodSandboxId:2e17b9ce366791ccbe7be90fac891b3ee72587a6258545530b732532c2cb3a60,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724008079526716639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd66e844c8c1cf0bca8571443427a34e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320,PodSandboxId:bc82656318d897233bf510f47fbafdcf69a77c16ce67a451201b1a6f5f105c89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724008065531808934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d6e5e1d924faa0cbaf2a98331cba44192754b1cda5c19934fea840d5b640d326,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724008065666525967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c,PodSandboxId:bc6acbfa3e23e5510924996ef360ef90c6fe6ebfd49e61957a1c21779571feee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724008065502418956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0201fb0f
3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3,PodSandboxId:a20694505a6e78eb5c262d833a9d46d3cda4ac689f3f9adde8785e843e4e5df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724008065385558143,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6eabe6cb2b27456043492a7a17f7a
a98a7ace76e092cb01972aebd6beca960f,PodSandboxId:c99f76b154ff2d0efeb49b2d69bc06c09e5780eec60473697885123776031967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008065434817846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724008065276873163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724008065267031643,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3,PodSandboxId:631f0bdb59f802be42fbd3ac58ebcb78e5061a0cabcf4e75d0f1b0107762443d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724008065217000978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70,PodSandboxId:ac3b59788a4b90c3842cd67f36786f6348d794e4843874b697034f2559e98b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008060250386766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724007567164702589,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379297354682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379300776150,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724007367338619690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724007363376537147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724007351593272153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724007351506952426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fee94df4-2cea-4a9b-8506-ea5652f9dd32 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.111747387Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94d66bfd-b4d5-48db-acb8-7bc698f7980f name=/runtime.v1.RuntimeService/Version
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.111837312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94d66bfd-b4d5-48db-acb8-7bc698f7980f name=/runtime.v1.RuntimeService/Version
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.113314774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43d57a28-0595-4f74-bd03-cf610bb38d92 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.113768377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008245113744940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43d57a28-0595-4f74-bd03-cf610bb38d92 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.114410162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a34c066d-3d2e-46ac-969c-7d34d421e778 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.114467191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a34c066d-3d2e-46ac-969c-7d34d421e778 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:10:45 ha-189125 crio[3711]: time="2024-08-18 19:10:45.117272778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355ce87e30385ee6ebe57147018b9dcef3cebb9c14cf6115f6c958193bb4673e,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724008151505840224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724008112502594529,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724008105503005864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d921b23237765708d696d48a2b78d70e5573a279dd5aa8c1c21b3e87f144be,PodSandboxId:c624d083cfe8b1918684e13f427d19717512f77a8b8dd1cbc946119d91dcc4ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724008098865959919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc77b3d0c32bba19cf0f4552096d3a4bd50cca218c4ad3c468ff779fcbdaea05,PodSandboxId:2e17b9ce366791ccbe7be90fac891b3ee72587a6258545530b732532c2cb3a60,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724008079526716639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd66e844c8c1cf0bca8571443427a34e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320,PodSandboxId:bc82656318d897233bf510f47fbafdcf69a77c16ce67a451201b1a6f5f105c89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724008065531808934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d6e5e1d924faa0cbaf2a98331cba44192754b1cda5c19934fea840d5b640d326,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724008065666525967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c,PodSandboxId:bc6acbfa3e23e5510924996ef360ef90c6fe6ebfd49e61957a1c21779571feee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724008065502418956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0201fb0f
3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3,PodSandboxId:a20694505a6e78eb5c262d833a9d46d3cda4ac689f3f9adde8785e843e4e5df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724008065385558143,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6eabe6cb2b27456043492a7a17f7a
a98a7ace76e092cb01972aebd6beca960f,PodSandboxId:c99f76b154ff2d0efeb49b2d69bc06c09e5780eec60473697885123776031967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008065434817846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724008065276873163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724008065267031643,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3,PodSandboxId:631f0bdb59f802be42fbd3ac58ebcb78e5061a0cabcf4e75d0f1b0107762443d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724008065217000978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70,PodSandboxId:ac3b59788a4b90c3842cd67f36786f6348d794e4843874b697034f2559e98b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008060250386766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724007567164702589,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379297354682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379300776150,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724007367338619690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724007363376537147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724007351593272153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724007351506952426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a34c066d-3d2e-46ac-969c-7d34d421e778 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	355ce87e30385       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   24016516664de       storage-provisioner
	b4bdc19004cc6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   0a7968b9836c3       kube-apiserver-ha-189125
	391bbfbf5a674       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Running             kube-controller-manager   2                   5a09bb72754f2       kube-controller-manager-ha-189125
	30d921b232377       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   c624d083cfe8b       busybox-7dff88458-kxdwj
	fc77b3d0c32bb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   2e17b9ce36679       kube-vip-ha-189125
	d6e5e1d924faa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   24016516664de       storage-provisioner
	2e5bf40065d4e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   bc82656318d89       kube-proxy-96xwx
	d282d86ace46d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   bc6acbfa3e23e       kindnet-jwxjh
	d6eabe6cb2b27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   c99f76b154ff2       coredns-6f6b679f8f-q9j97
	e0201fb0f3e91       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   a20694505a6e7       kube-scheduler-ha-189125
	a9d10442a1782       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   0a7968b9836c3       kube-apiserver-ha-189125
	ce8ceea70f09e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   5a09bb72754f2       kube-controller-manager-ha-189125
	504747e4441ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago        Running             etcd                      1                   631f0bdb59f80       etcd-ha-189125
	59c8ad3c2bd21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   ac3b59788a4b9       coredns-6f6b679f8f-7xr26
	1cbf1a420990c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   8cdf7a8433c4d       busybox-7dff88458-kxdwj
	181bcd36f89b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   c4e0fe307dc97       coredns-6f6b679f8f-q9j97
	f095c1d3ba818       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   0e090955bb301       coredns-6f6b679f8f-7xr26
	197dd2bffa6c8       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    14 minutes ago       Exited              kindnet-cni               0                   c93b973b05129       kindnet-jwxjh
	d3f078fad6871       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      14 minutes ago       Exited              kube-proxy                0                   c28cd1212a8c0       kube-proxy-96xwx
	79fc87641651d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      14 minutes ago       Exited              etcd                      0                   b20bbedf6c011       etcd-ha-189125
	8eb7a6513c9b9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      14 minutes ago       Exited              kube-scheduler            0                   6fe0bbacb48d2       kube-scheduler-ha-189125
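
	The table above is the node's CRI view of every container, running and exited, with its restart attempt count. A minimal sketch of reproducing it by hand, assuming the usual minikube tooling (crictl ships in the minikube node image; profile name taken from the logs):

	  $ minikube -p ha-189125 ssh     # open a shell on the ha-189125 node
	  $ sudo crictl ps -a             # list running and exited containers with their ATTEMPT counts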
	
	
	==> coredns [181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2] <==
	[INFO] 10.244.2.2:56571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00135054s
	[INFO] 10.244.2.2:43437 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086979s
	[INFO] 10.244.0.4:53861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002025942s
	[INFO] 10.244.0.4:36847 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001326246s
	[INFO] 10.244.0.4:36223 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073856s
	[INFO] 10.244.0.4:53397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051079s
	[INFO] 10.244.0.4:60257 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077527s
	[INFO] 10.244.1.2:36105 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142033s
	[INFO] 10.244.2.2:43159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120043s
	[INFO] 10.244.2.2:48451 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105513s
	[INFO] 10.244.2.2:40617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090209s
	[INFO] 10.244.2.2:53467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079345s
	[INFO] 10.244.0.4:34375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009177s
	[INFO] 10.244.0.4:47256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098542s
	[INFO] 10.244.0.4:38739 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087517s
	[INFO] 10.244.1.2:44329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157424s
	[INFO] 10.244.1.2:52970 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000328904s
	[INFO] 10.244.2.2:35139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010364s
	[INFO] 10.244.2.2:51553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000143049s
	[INFO] 10.244.0.4:55737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097209s
	[INFO] 10.244.0.4:56754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040314s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1952&timeout=6m56s&timeoutSeconds=416&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1950&timeout=7m25s&timeoutSeconds=445&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
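
	The recurring "dial tcp 10.96.0.1:443: connect: no route to host" entries above show this CoreDNS instance losing its path to the kubernetes Service VIP before it is sent SIGTERM. A minimal sketch of probing that VIP directly from the node, using tools already present in the minikube image (/version is used here only as a convenient unauthenticated endpoint):

	  $ minikube -p ha-189125 ssh
	  $ curl -k https://10.96.0.1:443/version   # apiserver version JSON if reachable; "no route to host" reproduces the failure above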
	
	
	==> coredns [59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[127350764]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:07:48.738) (total time: 10001ms):
	Trace[127350764]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:07:58.739)
	Trace[127350764]: [10.001416775s] [10.001416775s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37038->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37038->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50170->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50170->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d6eabe6cb2b27456043492a7a17f7aa98a7ace76e092cb01972aebd6beca960f] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:38038->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:38038->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38014->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[964636579]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:07:57.255) (total time: 10792ms):
	Trace[964636579]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38014->10.96.0.1:443: read: connection reset by peer 10792ms (19:08:08.048)
	Trace[964636579]: [10.792609462s] [10.792609462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38014->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107] <==
	[INFO] 10.244.0.4:50813 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001489232s
	[INFO] 10.244.1.2:44640 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003618953s
	[INFO] 10.244.1.2:37984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161286s
	[INFO] 10.244.2.2:55904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150006s
	[INFO] 10.244.2.2:38276 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00189507s
	[INFO] 10.244.2.2:42054 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179179s
	[INFO] 10.244.2.2:35911 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190164s
	[INFO] 10.244.2.2:52357 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163157s
	[INFO] 10.244.0.4:38374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136266s
	[INFO] 10.244.0.4:33983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103666s
	[INFO] 10.244.0.4:42233 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069982s
	[INFO] 10.244.1.2:39502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134749s
	[INFO] 10.244.1.2:38715 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102799s
	[INFO] 10.244.1.2:55122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135608s
	[INFO] 10.244.0.4:56934 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000488s
	[INFO] 10.244.1.2:45200 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251667s
	[INFO] 10.244.1.2:35239 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131205s
	[INFO] 10.244.2.2:47108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152092s
	[INFO] 10.244.2.2:45498 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093397s
	[INFO] 10.244.0.4:52889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059058s
	[INFO] 10.244.0.4:55998 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042989s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1952&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1952&timeout=5m40s&timeoutSeconds=340&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-189125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T18_55_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:55:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:10:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:08:31 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:08:31 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:08:31 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:08:31 +0000   Sun, 18 Aug 2024 18:56:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    ha-189125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9520f8bfe7ab47fca640aa213dbc51c5
	  System UUID:                9520f8bf-e7ab-47fc-a640-aa213dbc51c5
	  Boot ID:                    d5000132-c81a-4416-b5cd-bc4cc58a7c4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kxdwj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-7xr26             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-q9j97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-189125                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-jwxjh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-189125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-189125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-96xwx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-189125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-189125                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 2m16s                 kube-proxy       
	  Normal   Starting                 14m                   kube-proxy       
	  Normal   NodeHasSufficientMemory  14m                   kubelet          Node ha-189125 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                   kubelet          Node ha-189125 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                   kubelet          Node ha-189125 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           14m                   node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal   NodeReady                14m                   kubelet          Node ha-189125 status is now: NodeReady
	  Normal   RegisteredNode           12m                   node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal   RegisteredNode           11m                   node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Warning  ContainerGCFailed        3m48s                 kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m4s (x3 over 3m53s)  kubelet          Node ha-189125 status is now: NodeNotReady
	  Normal   RegisteredNode           2m22s                 node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal   RegisteredNode           2m8s                  node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal   RegisteredNode           42s                   node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	
	
	Name:               ha-189125-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T18_57_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:57:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:10:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:09:10 +0000   Sun, 18 Aug 2024 19:08:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:09:10 +0000   Sun, 18 Aug 2024 19:08:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:09:10 +0000   Sun, 18 Aug 2024 19:08:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:09:10 +0000   Sun, 18 Aug 2024 19:08:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    ha-189125-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3324dc2b927f496881437c52ed831dff
	  System UUID:                3324dc2b-927f-4968-8143-7c52ed831dff
	  Boot ID:                    4823ca56-5a42-4c8c-8af0-f183e470fe0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8bwfj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-189125-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-qhnpv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-189125-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-189125-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-scwlr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-189125-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-189125-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                    node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-189125-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-189125-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-189125-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  NodeNotReady             9m29s                  node-controller  Node ha-189125-m02 status is now: NodeNotReady
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m43s (x8 over 2m43s)  kubelet          Node ha-189125-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m43s (x8 over 2m43s)  kubelet          Node ha-189125-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m43s (x7 over 2m43s)  kubelet          Node ha-189125-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m22s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  RegisteredNode           42s                    node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	
	
	Name:               ha-189125-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T18_58_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:58:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:10:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:10:13 +0000   Sun, 18 Aug 2024 19:09:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:10:13 +0000   Sun, 18 Aug 2024 19:09:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:10:13 +0000   Sun, 18 Aug 2024 19:09:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:10:13 +0000   Sun, 18 Aug 2024 19:09:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-189125-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d3ec6cee66841f19e0d0001d5bf49e3
	  System UUID:                4d3ec6ce-e668-41f1-9e0d-0001d5bf49e3
	  Boot ID:                    9499538a-d5f1-4da4-8d26-1455fe500d76
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fvdcn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-189125-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-24xql                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-189125-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-189125-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-22f8v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-189125-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-189125-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 45s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-189125-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-189125-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-189125-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal   RegisteredNode           2m22s              node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal   RegisteredNode           2m8s               node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	  Normal   NodeNotReady             102s               node-controller  Node ha-189125-m03 status is now: NodeNotReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 62s (x2 over 62s)  kubelet          Node ha-189125-m03 has been rebooted, boot id: 9499538a-d5f1-4da4-8d26-1455fe500d76
	  Normal   NodeHasSufficientMemory  62s (x3 over 62s)  kubelet          Node ha-189125-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x3 over 62s)  kubelet          Node ha-189125-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x3 over 62s)  kubelet          Node ha-189125-m03 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             62s                kubelet          Node ha-189125-m03 status is now: NodeNotReady
	  Normal   NodeReady                62s                kubelet          Node ha-189125-m03 status is now: NodeReady
	  Normal   RegisteredNode           42s                node-controller  Node ha-189125-m03 event: Registered Node ha-189125-m03 in Controller
	
	
	Name:               ha-189125-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T19_00_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:00:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:10:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:10:37 +0000   Sun, 18 Aug 2024 19:10:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:10:37 +0000   Sun, 18 Aug 2024 19:10:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:10:37 +0000   Sun, 18 Aug 2024 19:10:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:10:37 +0000   Sun, 18 Aug 2024 19:10:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    ha-189125-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aaeec6aea01d4746832fda2dc541437c
	  System UUID:                aaeec6ae-a01d-4746-832f-da2dc541437c
	  Boot ID:                    86399eab-ba5f-4a2b-9081-1c1e40769c26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24hmx       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-krtg7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-189125-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-189125-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-189125-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-189125-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m22s              node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   RegisteredNode           2m8s               node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   NodeNotReady             102s               node-controller  Node ha-189125-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           42s                node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-189125-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-189125-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-189125-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-189125-m04 has been rebooted, boot id: 86399eab-ba5f-4a2b-9081-1c1e40769c26
	  Normal   NodeReady                8s                 kubelet          Node ha-189125-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.511172] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053311] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195743] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.133817] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.270401] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.027475] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +4.080385] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.059467] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140089] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.075123] kauditd_printk_skb: 79 callbacks suppressed
	[Aug18 18:56] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.023234] kauditd_printk_skb: 36 callbacks suppressed
	[Aug18 18:57] kauditd_printk_skb: 26 callbacks suppressed
	[Aug18 19:04] kauditd_printk_skb: 1 callbacks suppressed
	[Aug18 19:07] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
	[  +0.146860] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +0.178026] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.138747] systemd-fstab-generator[3668]: Ignoring "noauto" option for root device
	[  +0.292822] systemd-fstab-generator[3697]: Ignoring "noauto" option for root device
	[  +0.912666] systemd-fstab-generator[3796]: Ignoring "noauto" option for root device
	[  +5.310048] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.263628] kauditd_printk_skb: 75 callbacks suppressed
	[Aug18 19:08] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3] <==
	{"level":"warn","ts":"2024-08-18T19:09:40.297403Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.170:2380/version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:40.297469Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:41.267421Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5e74e6c9f0774ce1","rtt":"0s","error":"dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:41.267571Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5e74e6c9f0774ce1","rtt":"0s","error":"dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:44.299999Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.170:2380/version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:44.300257Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:46.268550Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5e74e6c9f0774ce1","rtt":"0s","error":"dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:46.268688Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5e74e6c9f0774ce1","rtt":"0s","error":"dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:48.302869Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.170:2380/version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:48.302989Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:51.269433Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5e74e6c9f0774ce1","rtt":"0s","error":"dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:51.269457Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5e74e6c9f0774ce1","rtt":"0s","error":"dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:52.305367Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.170:2380/version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:52.305441Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:56.270126Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"5e74e6c9f0774ce1","rtt":"0s","error":"dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:56.270158Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"5e74e6c9f0774ce1","rtt":"0s","error":"dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:56.307765Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.170:2380/version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-18T19:09:56.307909Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"5e74e6c9f0774ce1","error":"Get \"https://192.168.39.170:2380/version\": dial tcp 192.168.39.170:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-18T19:09:57.022382Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:09:57.022460Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:09:57.025030Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:09:57.054620Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7f2a407b6bb4eb12","to":"5e74e6c9f0774ce1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-18T19:09:57.054683Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:09:57.067248Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7f2a407b6bb4eb12","to":"5e74e6c9f0774ce1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-18T19:09:57.067308Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	
	
	==> etcd [79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125] <==
	{"level":"warn","ts":"2024-08-18T19:06:06.482856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:06:05.905375Z","time spent":"577.478607ms","remote":"127.0.0.1:57982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:10000 "}
	2024/08/18 19:06:06 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:06:06.745825Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.49:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:06:06.745884Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.49:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:06:06.746053Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7f2a407b6bb4eb12","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-18T19:06:06.746358Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746406Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746457Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746577Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746639Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746688Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746716Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746753Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.746780Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.746819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.746910Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.746956Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.747002Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.747029Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.750920Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.49:2380"}
	{"level":"warn","ts":"2024-08-18T19:06:06.751038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.85943136s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-18T19:06:06.751181Z","caller":"traceutil/trace.go:171","msg":"trace[1867387594] range","detail":"{range_begin:; range_end:; }","duration":"8.859594342s","start":"2024-08-18T19:05:57.891576Z","end":"2024-08-18T19:06:06.751170Z","steps":["trace[1867387594] 'agreement among raft nodes before linearized reading'  (duration: 8.859427951s)"],"step_count":1}
	{"level":"error","ts":"2024-08-18T19:06:06.751301Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-18T19:06:06.751060Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.49:2380"}
	{"level":"info","ts":"2024-08-18T19:06:06.751464Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-189125","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.49:2380"],"advertise-client-urls":["https://192.168.39.49:2379"]}
	
	
	==> kernel <==
	 19:10:45 up 15 min,  0 users,  load average: 0.49, 0.53, 0.30
	Linux ha-189125 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6] <==
	I0818 19:05:28.445052       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:05:38.445280       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:05:38.445397       1 main.go:299] handling current node
	I0818 19:05:38.445431       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:05:38.445450       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:05:38.445656       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:05:38.445685       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:05:38.445753       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:05:38.445771       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:05:48.446996       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:05:48.447215       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:05:48.447437       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:05:48.447494       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:05:48.447630       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:05:48.447687       1 main.go:299] handling current node
	I0818 19:05:48.447726       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:05:48.447801       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:05:58.452044       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:05:58.452153       1 main.go:299] handling current node
	I0818 19:05:58.452174       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:05:58.452180       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:05:58.452360       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:05:58.452386       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:05:58.452466       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:05:58.452486       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c] <==
	I0818 19:10:06.675810       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:10:16.675944       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:10:16.676069       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:10:16.676379       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:10:16.676418       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:10:16.676509       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:10:16.676518       1 main.go:299] handling current node
	I0818 19:10:16.676546       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:10:16.676575       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:10:26.674947       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:10:26.675211       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:10:26.675455       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:10:26.675485       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:10:26.675581       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:10:26.675601       1 main.go:299] handling current node
	I0818 19:10:26.675644       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:10:26.675674       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:10:36.674852       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:10:36.674913       1 main.go:299] handling current node
	I0818 19:10:36.674928       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:10:36.674934       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:10:36.675189       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:10:36.675214       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:10:36.675297       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:10:36.675318       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0] <==
	I0818 19:07:45.812312       1 options.go:228] external host was not specified, using 192.168.39.49
	I0818 19:07:45.825316       1 server.go:142] Version: v1.31.0
	I0818 19:07:45.825520       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:07:46.954894       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0818 19:07:46.965923       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:07:46.969890       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0818 19:07:46.969957       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0818 19:07:46.970219       1 instance.go:232] Using reconciler: lease
	W0818 19:08:06.952534       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0818 19:08:06.952539       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0818 19:08:06.971359       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e] <==
	I0818 19:08:34.354802       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0818 19:08:34.353390       1 controller.go:142] Starting OpenAPI controller
	I0818 19:08:34.368666       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0818 19:08:34.368794       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:08:34.423464       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:08:34.423569       1 policy_source.go:224] refreshing policies
	I0818 19:08:34.441187       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:08:34.452332       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:08:34.452440       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0818 19:08:34.452474       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:08:34.453718       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 19:08:34.453809       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:08:34.453826       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:08:34.453830       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:08:34.453834       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:08:34.453981       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:08:34.454565       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0818 19:08:34.454859       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:08:34.454876       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 19:08:34.460453       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:08:34.508993       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 19:08:35.361915       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0818 19:08:35.672650       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.147 192.168.39.49]
	I0818 19:08:35.674224       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 19:08:35.680477       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432] <==
	I0818 19:09:03.088337       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:09:03.145164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="28.71043ms"
	I0818 19:09:03.145337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.958µs"
	I0818 19:09:07.629636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m03"
	I0818 19:09:08.315537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m03"
	I0818 19:09:09.051344       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-n594w EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-n594w\": the object has been modified; please apply your changes to the latest version and try again"
	I0818 19:09:09.054321       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"7fc217fe-b0f9-4c4d-9fa0-d9a1b698d55e", APIVersion:"v1", ResourceVersion:"248", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-n594w EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-n594w": the object has been modified; please apply your changes to the latest version and try again
	I0818 19:09:09.082273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="54.409799ms"
	I0818 19:09:09.082831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="200.608µs"
	I0818 19:09:10.856413       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m02"
	I0818 19:09:17.714938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:09:18.397018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:09:43.226644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m03"
	I0818 19:09:43.245687       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m03"
	I0818 19:09:43.287264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m03"
	I0818 19:09:44.176450       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.015µs"
	I0818 19:10:03.352790       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:10:03.422781       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:10:07.562209       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.58124ms"
	I0818 19:10:07.562576       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.113µs"
	I0818 19:10:13.812834       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m03"
	I0818 19:10:37.678564       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-189125-m04"
	I0818 19:10:37.678982       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:10:37.696302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:10:38.316693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	
	
	==> kube-controller-manager [ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c] <==
	I0818 19:07:46.785450       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:07:47.186390       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:07:47.186491       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:07:47.188272       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:07:47.188922       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:07:47.189169       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:07:47.189277       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0818 19:08:07.976607       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.49:8443/healthz\": dial tcp 192.168.39.49:8443: connect: connection refused"
	
	
	==> kube-proxy [2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:07:50.191633       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0818 19:07:53.263872       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0818 19:07:56.336461       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0818 19:08:02.480780       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0818 19:08:11.695765       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0818 19:08:28.673396       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.49"]
	E0818 19:08:28.673542       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:08:28.726596       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:08:28.726672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:08:28.726713       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:08:28.729485       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:08:28.730045       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:08:28.730126       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:08:28.731941       1 config.go:197] "Starting service config controller"
	I0818 19:08:28.732001       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:08:28.732042       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:08:28.732063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:08:28.733706       1 config.go:326] "Starting node config controller"
	I0818 19:08:28.733732       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:08:28.832679       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:08:28.832752       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:08:28.833821       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d] <==
	E0818 19:04:48.751475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:48.751549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:48.751585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:48.751700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:48.751751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:55.087513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:55.087609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:55.087704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:55.087742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:58.159575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:58.159649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:04.305152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:04.305338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:07.377040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:07.377177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:10.448718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:10.448835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:22.735833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:22.736963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:28.880688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:28.881025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:35.024375       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:35.024590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:06:05.743666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:06:05.743763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058] <==
	E0818 18:58:54.898809       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24xql\": pod kindnet-24xql is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-24xql" node="ha-189125-m03"
	E0818 18:58:54.898876       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba1034b3-04c9-4c64-8fde-7b45ea42f21c(kube-system/kindnet-24xql) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24xql"
	E0818 18:58:54.898900       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24xql\": pod kindnet-24xql is already assigned to node \"ha-189125-m03\"" pod="kube-system/kindnet-24xql"
	I0818 18:58:54.898918       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24xql" node="ha-189125-m03"
	E0818 18:59:23.602753       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8bwfj\": pod busybox-7dff88458-8bwfj is already assigned to node \"ha-189125-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8bwfj" node="ha-189125-m02"
	E0818 18:59:23.602879       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8bwfj\": pod busybox-7dff88458-8bwfj is already assigned to node \"ha-189125-m02\"" pod="default/busybox-7dff88458-8bwfj"
	E0818 18:59:23.652419       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fvdcn\": pod busybox-7dff88458-fvdcn is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fvdcn" node="ha-189125-m03"
	E0818 18:59:23.652848       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 19fc5686-7021-4b6f-a097-71f7b6d6a76e(default/busybox-7dff88458-fvdcn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fvdcn"
	E0818 18:59:23.652953       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fvdcn\": pod busybox-7dff88458-fvdcn is already assigned to node \"ha-189125-m03\"" pod="default/busybox-7dff88458-fvdcn"
	I0818 18:59:23.653004       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fvdcn" node="ha-189125-m03"
	E0818 18:59:23.653552       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kxdwj\": pod busybox-7dff88458-kxdwj is already assigned to node \"ha-189125\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kxdwj" node="ha-189125"
	E0818 18:59:23.655579       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e2ebdc21-75ca-43ac-86f2-7c492eefe97d(default/busybox-7dff88458-kxdwj) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-kxdwj"
	E0818 18:59:23.655718       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kxdwj\": pod busybox-7dff88458-kxdwj is already assigned to node \"ha-189125\"" pod="default/busybox-7dff88458-kxdwj"
	I0818 18:59:23.655773       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-kxdwj" node="ha-189125"
	E0818 19:05:57.627368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0818 19:05:57.667171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0818 19:05:58.307794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0818 19:05:58.761376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0818 19:05:59.522211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0818 19:05:59.626204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0818 19:06:00.175363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0818 19:06:03.437897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0818 19:06:04.315214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0818 19:06:05.316192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0818 19:06:06.462220       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e0201fb0f3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3] <==
	W0818 19:08:24.327485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.49:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:24.327616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.49:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:24.920737       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.49:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:24.920811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.49:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:25.884567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.49:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:25.884652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.49:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:27.018018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.49:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:27.018158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.49:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:27.056979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.49:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:27.057133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.49:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:27.546667       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.49:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:27.546806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.49:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:27.773556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.49:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:27.773670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.49:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:28.269507       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.49:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:28.269616       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.49:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:28.508870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.49:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:28.508938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.49:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:29.006555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.49:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:29.006618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.49:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:29.920886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.49:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:29.920985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.49:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:30.450767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.49:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:30.450875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.49:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	I0818 19:08:52.088190       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 19:09:12 ha-189125 kubelet[1332]: I0818 19:09:12.509145    1332 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-189125"
	Aug 18 19:09:16 ha-189125 kubelet[1332]: I0818 19:09:16.159155    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-kxdwj" podStartSLOduration=590.248340311 podStartE2EDuration="9m53.159075487s" podCreationTimestamp="2024-08-18 18:59:23 +0000 UTC" firstStartedPulling="2024-08-18 18:59:24.237628157 +0000 UTC m=+206.915555899" lastFinishedPulling="2024-08-18 18:59:27.148363333 +0000 UTC m=+209.826291075" observedRunningTime="2024-08-18 18:59:27.346525153 +0000 UTC m=+210.024452916" watchObservedRunningTime="2024-08-18 19:09:16.159075487 +0000 UTC m=+798.837003229"
	Aug 18 19:09:17 ha-189125 kubelet[1332]: E0818 19:09:17.716602    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008157716256904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:17 ha-189125 kubelet[1332]: E0818 19:09:17.716646    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008157716256904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:27 ha-189125 kubelet[1332]: E0818 19:09:27.723140    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008167721643693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:27 ha-189125 kubelet[1332]: E0818 19:09:27.723653    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008167721643693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:37 ha-189125 kubelet[1332]: E0818 19:09:37.726699    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008177726213435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:37 ha-189125 kubelet[1332]: E0818 19:09:37.727238    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008177726213435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:47 ha-189125 kubelet[1332]: E0818 19:09:47.730224    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008187729534569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:47 ha-189125 kubelet[1332]: E0818 19:09:47.730271    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008187729534569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:57 ha-189125 kubelet[1332]: E0818 19:09:57.543665    1332 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:09:57 ha-189125 kubelet[1332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:09:57 ha-189125 kubelet[1332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:09:57 ha-189125 kubelet[1332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:09:57 ha-189125 kubelet[1332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:09:57 ha-189125 kubelet[1332]: E0818 19:09:57.733225    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008197732693781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:09:57 ha-189125 kubelet[1332]: E0818 19:09:57.733263    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008197732693781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:10:07 ha-189125 kubelet[1332]: E0818 19:10:07.735251    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008207734866422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:10:07 ha-189125 kubelet[1332]: E0818 19:10:07.735769    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008207734866422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:10:17 ha-189125 kubelet[1332]: E0818 19:10:17.737540    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008217737150473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:10:17 ha-189125 kubelet[1332]: E0818 19:10:17.737583    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008217737150473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:10:27 ha-189125 kubelet[1332]: E0818 19:10:27.739064    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008227738733998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:10:27 ha-189125 kubelet[1332]: E0818 19:10:27.739156    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008227738733998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:10:37 ha-189125 kubelet[1332]: E0818 19:10:37.741872    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008237741490328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:10:37 ha-189125 kubelet[1332]: E0818 19:10:37.742263    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008237741490328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:10:44.615328   33398 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-7747/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-189125 -n ha-189125
helpers_test.go:261: (dbg) Run:  kubectl --context ha-189125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (403.45s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 stop -v=7 --alsologtostderr
E0818 19:11:44.019465   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 stop -v=7 --alsologtostderr: exit status 82 (2m0.461880832s)

                                                
                                                
-- stdout --
	* Stopping node "ha-189125-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:11:04.199479   33809 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:11:04.199598   33809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:11:04.199609   33809 out.go:358] Setting ErrFile to fd 2...
	I0818 19:11:04.199616   33809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:11:04.199793   33809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:11:04.200040   33809 out.go:352] Setting JSON to false
	I0818 19:11:04.200148   33809 mustload.go:65] Loading cluster: ha-189125
	I0818 19:11:04.200527   33809 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:11:04.200621   33809 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 19:11:04.200810   33809 mustload.go:65] Loading cluster: ha-189125
	I0818 19:11:04.200988   33809 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:11:04.201030   33809 stop.go:39] StopHost: ha-189125-m04
	I0818 19:11:04.201418   33809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:11:04.201472   33809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:11:04.216047   33809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40061
	I0818 19:11:04.216479   33809 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:11:04.217026   33809 main.go:141] libmachine: Using API Version  1
	I0818 19:11:04.217050   33809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:11:04.217403   33809 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:11:04.220133   33809 out.go:177] * Stopping node "ha-189125-m04"  ...
	I0818 19:11:04.221617   33809 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 19:11:04.221651   33809 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:11:04.221915   33809 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 19:11:04.221942   33809 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:11:04.224927   33809 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:11:04.225466   33809 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 20:10:32 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:11:04.225504   33809 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:11:04.225704   33809 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:11:04.225920   33809 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:11:04.226085   33809 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:11:04.226240   33809 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	I0818 19:11:04.314552   33809 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0818 19:11:04.368662   33809 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0818 19:11:04.421550   33809 main.go:141] libmachine: Stopping "ha-189125-m04"...
	I0818 19:11:04.421611   33809 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:11:04.423042   33809 main.go:141] libmachine: (ha-189125-m04) Calling .Stop
	I0818 19:11:04.426110   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 0/120
	I0818 19:11:05.427569   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 1/120
	I0818 19:11:06.428850   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 2/120
	I0818 19:11:07.430179   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 3/120
	I0818 19:11:08.431507   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 4/120
	I0818 19:11:09.433054   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 5/120
	I0818 19:11:10.434578   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 6/120
	I0818 19:11:11.435834   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 7/120
	I0818 19:11:12.437094   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 8/120
	I0818 19:11:13.438209   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 9/120
	I0818 19:11:14.440391   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 10/120
	I0818 19:11:15.441714   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 11/120
	I0818 19:11:16.443019   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 12/120
	I0818 19:11:17.444543   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 13/120
	I0818 19:11:18.446844   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 14/120
	I0818 19:11:19.448529   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 15/120
	I0818 19:11:20.450631   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 16/120
	I0818 19:11:21.452696   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 17/120
	I0818 19:11:22.454132   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 18/120
	I0818 19:11:23.455681   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 19/120
	I0818 19:11:24.457550   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 20/120
	I0818 19:11:25.458903   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 21/120
	I0818 19:11:26.460140   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 22/120
	I0818 19:11:27.461876   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 23/120
	I0818 19:11:28.463544   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 24/120
	I0818 19:11:29.465053   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 25/120
	I0818 19:11:30.466759   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 26/120
	I0818 19:11:31.468321   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 27/120
	I0818 19:11:32.469904   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 28/120
	I0818 19:11:33.471589   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 29/120
	I0818 19:11:34.473752   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 30/120
	I0818 19:11:35.475406   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 31/120
	I0818 19:11:36.476790   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 32/120
	I0818 19:11:37.478041   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 33/120
	I0818 19:11:38.479512   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 34/120
	I0818 19:11:39.481143   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 35/120
	I0818 19:11:40.482942   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 36/120
	I0818 19:11:41.484397   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 37/120
	I0818 19:11:42.486007   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 38/120
	I0818 19:11:43.487496   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 39/120
	I0818 19:11:44.489411   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 40/120
	I0818 19:11:45.490578   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 41/120
	I0818 19:11:46.492172   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 42/120
	I0818 19:11:47.493873   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 43/120
	I0818 19:11:48.495209   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 44/120
	I0818 19:11:49.497299   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 45/120
	I0818 19:11:50.498690   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 46/120
	I0818 19:11:51.500070   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 47/120
	I0818 19:11:52.501330   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 48/120
	I0818 19:11:53.502686   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 49/120
	I0818 19:11:54.504650   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 50/120
	I0818 19:11:55.506261   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 51/120
	I0818 19:11:56.507517   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 52/120
	I0818 19:11:57.509892   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 53/120
	I0818 19:11:58.511357   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 54/120
	I0818 19:11:59.513303   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 55/120
	I0818 19:12:00.514755   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 56/120
	I0818 19:12:01.516400   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 57/120
	I0818 19:12:02.518044   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 58/120
	I0818 19:12:03.519369   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 59/120
	I0818 19:12:04.521411   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 60/120
	I0818 19:12:05.523368   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 61/120
	I0818 19:12:06.525167   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 62/120
	I0818 19:12:07.526692   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 63/120
	I0818 19:12:08.527987   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 64/120
	I0818 19:12:09.529846   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 65/120
	I0818 19:12:10.531069   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 66/120
	I0818 19:12:11.532557   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 67/120
	I0818 19:12:12.533821   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 68/120
	I0818 19:12:13.535198   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 69/120
	I0818 19:12:14.536501   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 70/120
	I0818 19:12:15.537828   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 71/120
	I0818 19:12:16.539038   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 72/120
	I0818 19:12:17.540228   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 73/120
	I0818 19:12:18.541520   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 74/120
	I0818 19:12:19.543191   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 75/120
	I0818 19:12:20.544868   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 76/120
	I0818 19:12:21.546309   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 77/120
	I0818 19:12:22.547546   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 78/120
	I0818 19:12:23.548761   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 79/120
	I0818 19:12:24.550832   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 80/120
	I0818 19:12:25.552169   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 81/120
	I0818 19:12:26.553671   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 82/120
	I0818 19:12:27.554849   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 83/120
	I0818 19:12:28.555995   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 84/120
	I0818 19:12:29.557747   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 85/120
	I0818 19:12:30.558970   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 86/120
	I0818 19:12:31.560336   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 87/120
	I0818 19:12:32.561864   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 88/120
	I0818 19:12:33.563026   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 89/120
	I0818 19:12:34.565039   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 90/120
	I0818 19:12:35.566289   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 91/120
	I0818 19:12:36.567545   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 92/120
	I0818 19:12:37.568752   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 93/120
	I0818 19:12:38.570058   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 94/120
	I0818 19:12:39.571617   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 95/120
	I0818 19:12:40.573734   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 96/120
	I0818 19:12:41.575016   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 97/120
	I0818 19:12:42.576382   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 98/120
	I0818 19:12:43.577721   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 99/120
	I0818 19:12:44.579580   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 100/120
	I0818 19:12:45.580826   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 101/120
	I0818 19:12:46.583269   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 102/120
	I0818 19:12:47.584514   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 103/120
	I0818 19:12:48.585836   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 104/120
	I0818 19:12:49.587748   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 105/120
	I0818 19:12:50.589597   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 106/120
	I0818 19:12:51.591114   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 107/120
	I0818 19:12:52.592597   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 108/120
	I0818 19:12:53.593961   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 109/120
	I0818 19:12:54.595972   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 110/120
	I0818 19:12:55.597812   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 111/120
	I0818 19:12:56.599047   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 112/120
	I0818 19:12:57.600370   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 113/120
	I0818 19:12:58.601951   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 114/120
	I0818 19:12:59.603793   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 115/120
	I0818 19:13:00.605832   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 116/120
	I0818 19:13:01.607267   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 117/120
	I0818 19:13:02.608687   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 118/120
	I0818 19:13:03.610090   33809 main.go:141] libmachine: (ha-189125-m04) Waiting for machine to stop 119/120
	I0818 19:13:04.611197   33809 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0818 19:13:04.611260   33809 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0818 19:13:04.613100   33809 out.go:201] 
	W0818 19:13:04.614266   33809 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0818 19:13:04.614281   33809 out.go:270] * 
	* 
	W0818 19:13:04.616471   33809 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 19:13:04.617679   33809 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-189125 stop -v=7 --alsologtostderr": exit status 82
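Editor's note: the stop stderr above shows the driver polling the guest once per second, "Waiting for machine to stop 0/120" through "119/120", then giving up with GUEST_STOP_TIMEOUT because the VM still reports "Running". A minimal sketch of that poll-until-stopped pattern, with hypothetical stop/state callbacks and a shortened demo budget standing in for the real libvirt-backed driver:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop issues the stop request, then polls the machine state once per
// interval for up to attempts tries, mirroring the 0/120..119/120 lines above.
func waitForStop(stop func() error, state func() string, attempts int, interval time.Duration) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if state() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Toy callbacks that never report a stopped state, reproducing the timeout
	// path quickly (5 attempts at 10ms instead of the 120 one-second attempts).
	err := waitForStop(
		func() error { return nil },
		func() string { return "Running" },
		5, 10*time.Millisecond)
	fmt.Println("stop err:", err)
}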
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr: exit status 3 (18.864413601s)

                                                
                                                
-- stdout --
	ha-189125
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-189125-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:13:04.661782   34250 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:13:04.662006   34250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:13:04.662014   34250 out.go:358] Setting ErrFile to fd 2...
	I0818 19:13:04.662018   34250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:13:04.662195   34250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:13:04.662347   34250 out.go:352] Setting JSON to false
	I0818 19:13:04.662369   34250 mustload.go:65] Loading cluster: ha-189125
	I0818 19:13:04.662428   34250 notify.go:220] Checking for updates...
	I0818 19:13:04.662739   34250 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:13:04.662752   34250 status.go:255] checking status of ha-189125 ...
	I0818 19:13:04.663205   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:04.663267   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:04.678767   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44959
	I0818 19:13:04.679232   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:04.679879   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:04.679905   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:04.680222   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:04.680389   34250 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:13:04.682017   34250 status.go:330] ha-189125 host status = "Running" (err=<nil>)
	I0818 19:13:04.682032   34250 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:13:04.682348   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:04.682379   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:04.698154   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36943
	I0818 19:13:04.698573   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:04.699063   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:04.699087   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:04.699448   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:04.699673   34250 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:13:04.702729   34250 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:13:04.703142   34250 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:13:04.703177   34250 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:13:04.703265   34250 host.go:66] Checking if "ha-189125" exists ...
	I0818 19:13:04.703596   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:04.703636   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:04.718018   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33827
	I0818 19:13:04.718375   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:04.718798   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:04.718817   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:04.719104   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:04.719336   34250 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:13:04.719538   34250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:13:04.719563   34250 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:13:04.722124   34250 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:13:04.722549   34250 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:13:04.722577   34250 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:13:04.722692   34250 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:13:04.722856   34250 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:13:04.722997   34250 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:13:04.723133   34250 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:13:04.804877   34250 ssh_runner.go:195] Run: systemctl --version
	I0818 19:13:04.812631   34250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:13:04.829886   34250 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:13:04.829916   34250 api_server.go:166] Checking apiserver status ...
	I0818 19:13:04.829955   34250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:13:04.846781   34250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5067/cgroup
	W0818 19:13:04.856615   34250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5067/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:13:04.856672   34250 ssh_runner.go:195] Run: ls
	I0818 19:13:04.861145   34250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:13:04.868806   34250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:13:04.868830   34250 status.go:422] ha-189125 apiserver status = Running (err=<nil>)
	I0818 19:13:04.868839   34250 status.go:257] ha-189125 status: &{Name:ha-189125 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:13:04.868863   34250 status.go:255] checking status of ha-189125-m02 ...
	I0818 19:13:04.869193   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:04.869229   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:04.884319   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38925
	I0818 19:13:04.884748   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:04.885200   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:04.885221   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:04.885514   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:04.885684   34250 main.go:141] libmachine: (ha-189125-m02) Calling .GetState
	I0818 19:13:04.887050   34250 status.go:330] ha-189125-m02 host status = "Running" (err=<nil>)
	I0818 19:13:04.887069   34250 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:13:04.887482   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:04.887540   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:04.904034   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0818 19:13:04.904502   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:04.904993   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:04.905017   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:04.905315   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:04.905488   34250 main.go:141] libmachine: (ha-189125-m02) Calling .GetIP
	I0818 19:13:04.908394   34250 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:13:04.908910   34250 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 20:07:51 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:13:04.908938   34250 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:13:04.909103   34250 host.go:66] Checking if "ha-189125-m02" exists ...
	I0818 19:13:04.909554   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:04.909612   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:04.923940   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35961
	I0818 19:13:04.924375   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:04.924833   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:04.924853   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:04.925244   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:04.925469   34250 main.go:141] libmachine: (ha-189125-m02) Calling .DriverName
	I0818 19:13:04.925728   34250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:13:04.925751   34250 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHHostname
	I0818 19:13:04.928547   34250 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:13:04.929015   34250 main.go:141] libmachine: (ha-189125-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:f4:4c", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 20:07:51 +0000 UTC Type:0 Mac:52:54:00:a7:f4:4c Iaid: IPaddr:192.168.39.147 Prefix:24 Hostname:ha-189125-m02 Clientid:01:52:54:00:a7:f4:4c}
	I0818 19:13:04.929045   34250 main.go:141] libmachine: (ha-189125-m02) DBG | domain ha-189125-m02 has defined IP address 192.168.39.147 and MAC address 52:54:00:a7:f4:4c in network mk-ha-189125
	I0818 19:13:04.929198   34250 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHPort
	I0818 19:13:04.929382   34250 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHKeyPath
	I0818 19:13:04.929542   34250 main.go:141] libmachine: (ha-189125-m02) Calling .GetSSHUsername
	I0818 19:13:04.929683   34250 sshutil.go:53] new ssh client: &{IP:192.168.39.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m02/id_rsa Username:docker}
	I0818 19:13:05.009173   34250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:13:05.026825   34250 kubeconfig.go:125] found "ha-189125" server: "https://192.168.39.254:8443"
	I0818 19:13:05.026851   34250 api_server.go:166] Checking apiserver status ...
	I0818 19:13:05.026877   34250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:13:05.042039   34250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup
	W0818 19:13:05.052753   34250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1561/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:13:05.052809   34250 ssh_runner.go:195] Run: ls
	I0818 19:13:05.057634   34250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 19:13:05.061895   34250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 19:13:05.061913   34250 status.go:422] ha-189125-m02 apiserver status = Running (err=<nil>)
	I0818 19:13:05.061921   34250 status.go:257] ha-189125-m02 status: &{Name:ha-189125-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:13:05.061934   34250 status.go:255] checking status of ha-189125-m04 ...
	I0818 19:13:05.062198   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:05.062227   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:05.078300   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 19:13:05.078678   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:05.079132   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:05.079154   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:05.079460   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:05.079632   34250 main.go:141] libmachine: (ha-189125-m04) Calling .GetState
	I0818 19:13:05.081165   34250 status.go:330] ha-189125-m04 host status = "Running" (err=<nil>)
	I0818 19:13:05.081179   34250 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:13:05.081459   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:05.081509   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:05.096186   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0818 19:13:05.096613   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:05.097114   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:05.097154   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:05.097468   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:05.097690   34250 main.go:141] libmachine: (ha-189125-m04) Calling .GetIP
	I0818 19:13:05.100395   34250 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:13:05.100771   34250 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 20:10:32 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:13:05.100807   34250 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:13:05.100940   34250 host.go:66] Checking if "ha-189125-m04" exists ...
	I0818 19:13:05.101275   34250 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:13:05.101309   34250 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:13:05.116090   34250 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35981
	I0818 19:13:05.116597   34250 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:13:05.117077   34250 main.go:141] libmachine: Using API Version  1
	I0818 19:13:05.117096   34250 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:13:05.117381   34250 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:13:05.117574   34250 main.go:141] libmachine: (ha-189125-m04) Calling .DriverName
	I0818 19:13:05.117751   34250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:13:05.117769   34250 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHHostname
	I0818 19:13:05.120500   34250 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:13:05.120892   34250 main.go:141] libmachine: (ha-189125-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:53:ed", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 20:10:32 +0000 UTC Type:0 Mac:52:54:00:36:53:ed Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:ha-189125-m04 Clientid:01:52:54:00:36:53:ed}
	I0818 19:13:05.120913   34250 main.go:141] libmachine: (ha-189125-m04) DBG | domain ha-189125-m04 has defined IP address 192.168.39.252 and MAC address 52:54:00:36:53:ed in network mk-ha-189125
	I0818 19:13:05.121083   34250 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHPort
	I0818 19:13:05.121288   34250 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHKeyPath
	I0818 19:13:05.121432   34250 main.go:141] libmachine: (ha-189125-m04) Calling .GetSSHUsername
	I0818 19:13:05.121553   34250 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125-m04/id_rsa Username:docker}
	W0818 19:13:23.483568   34250 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.252:22: connect: no route to host
	W0818 19:13:23.483643   34250 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host
	E0818 19:13:23.483672   34250 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host
	I0818 19:13:23.483679   34250 status.go:257] ha-189125-m04 status: &{Name:ha-189125-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0818 19:13:23.483700   34250 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.252:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr" : exit status 3
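Editor's note: the status stderr above stalls for roughly 18 seconds before "dial tcp 192.168.39.252:22: connect: no route to host", since the worker node that failed to stop is now unreachable over SSH. A minimal sketch of a bounded reachability probe for the SSH port; the address is taken from the log and the 5-second timeout is an arbitrary assumption:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the node's SSH port with an explicit deadline so an unreachable
	// host fails fast instead of waiting on the OS-level dial timeout.
	addr := "192.168.39.252:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("host %s unreachable: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("host %s reachable on the ssh port\n", addr)
}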
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-189125 -n ha-189125
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-189125 logs -n 25: (1.637113393s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-189125 ssh -n ha-189125-m02 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04:/home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m04 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp testdata/cp-test.txt                                                | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125:/home/docker/cp-test_ha-189125-m04_ha-189125.txt                       |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125 sudo cat                                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125.txt                                 |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m02:/home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m02 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m03:/home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n                                                                 | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | ha-189125-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-189125 ssh -n ha-189125-m03 sudo cat                                          | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC | 18 Aug 24 19:00 UTC |
	|         | /home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-189125 node stop m02 -v=7                                                     | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:00 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-189125 node start m02 -v=7                                                    | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-189125 -v=7                                                           | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-189125 -v=7                                                                | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:04 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-189125 --wait=true -v=7                                                    | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:06 UTC | 18 Aug 24 19:10 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-189125                                                                | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:10 UTC |                     |
	| node    | ha-189125 node delete m03 -v=7                                                   | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:10 UTC | 18 Aug 24 19:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-189125 stop -v=7                                                              | ha-189125 | jenkins | v1.33.1 | 18 Aug 24 19:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 19:06:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 19:06:05.538758   31924 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:06:05.538887   31924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:06:05.538898   31924 out.go:358] Setting ErrFile to fd 2...
	I0818 19:06:05.538904   31924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:06:05.539085   31924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:06:05.539715   31924 out.go:352] Setting JSON to false
	I0818 19:06:05.540725   31924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2910,"bootTime":1724005056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:06:05.540788   31924 start.go:139] virtualization: kvm guest
	I0818 19:06:05.543300   31924 out.go:177] * [ha-189125] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:06:05.545058   31924 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:06:05.545099   31924 notify.go:220] Checking for updates...
	I0818 19:06:05.548344   31924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:06:05.550157   31924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:06:05.551646   31924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:06:05.552939   31924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:06:05.554202   31924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:06:05.555908   31924 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:06:05.556012   31924 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:06:05.556464   31924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:06:05.556507   31924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:06:05.571659   31924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0818 19:06:05.572189   31924 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:06:05.572752   31924 main.go:141] libmachine: Using API Version  1
	I0818 19:06:05.572772   31924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:06:05.573101   31924 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:06:05.573269   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:06:05.609144   31924 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 19:06:05.610456   31924 start.go:297] selected driver: kvm2
	I0818 19:06:05.610477   31924 start.go:901] validating driver "kvm2" against &{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:06:05.610616   31924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:06:05.610938   31924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:06:05.611029   31924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:06:05.626188   31924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:06:05.626867   31924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:06:05.626936   31924 cni.go:84] Creating CNI manager for ""
	I0818 19:06:05.626945   31924 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 19:06:05.626998   31924 start.go:340] cluster config:
	{Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:06:05.627145   31924 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:06:05.629749   31924 out.go:177] * Starting "ha-189125" primary control-plane node in "ha-189125" cluster
	I0818 19:06:05.631060   31924 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:06:05.631112   31924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 19:06:05.631137   31924 cache.go:56] Caching tarball of preloaded images
	I0818 19:06:05.631235   31924 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:06:05.631250   31924 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 19:06:05.631437   31924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/config.json ...
	I0818 19:06:05.631646   31924 start.go:360] acquireMachinesLock for ha-189125: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:06:05.631689   31924 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "ha-189125"
	I0818 19:06:05.631703   31924 start.go:96] Skipping create...Using existing machine configuration
	I0818 19:06:05.631713   31924 fix.go:54] fixHost starting: 
	I0818 19:06:05.631994   31924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:06:05.632024   31924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:06:05.646579   31924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46445
	I0818 19:06:05.647087   31924 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:06:05.647625   31924 main.go:141] libmachine: Using API Version  1
	I0818 19:06:05.647652   31924 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:06:05.647950   31924 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:06:05.648157   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:06:05.648335   31924 main.go:141] libmachine: (ha-189125) Calling .GetState
	I0818 19:06:05.649969   31924 fix.go:112] recreateIfNeeded on ha-189125: state=Running err=<nil>
	W0818 19:06:05.649988   31924 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 19:06:05.652796   31924 out.go:177] * Updating the running kvm2 "ha-189125" VM ...
	I0818 19:06:05.654252   31924 machine.go:93] provisionDockerMachine start ...
	I0818 19:06:05.654281   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:06:05.654530   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:05.657169   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.657658   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:05.657685   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.657866   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:05.658045   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.658252   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.658388   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:05.658533   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:06:05.658756   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:06:05.658769   31924 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 19:06:05.764746   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125
	
	I0818 19:06:05.764771   31924 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 19:06:05.765026   31924 buildroot.go:166] provisioning hostname "ha-189125"
	I0818 19:06:05.765049   31924 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 19:06:05.765234   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:05.767589   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.767930   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:05.767963   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.768131   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:05.768305   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.768468   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.768608   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:05.768800   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:06:05.768968   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:06:05.768980   31924 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-189125 && echo "ha-189125" | sudo tee /etc/hostname
	I0818 19:06:05.892450   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-189125
	
	I0818 19:06:05.892479   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:05.895089   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.895515   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:05.895552   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:05.895749   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:05.895944   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.896112   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:05.896267   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:05.896478   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:06:05.896687   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:06:05.896712   31924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-189125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-189125/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-189125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:06:06.012988   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:06:06.013020   31924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 19:06:06.013050   31924 buildroot.go:174] setting up certificates
	I0818 19:06:06.013064   31924 provision.go:84] configureAuth start
	I0818 19:06:06.013073   31924 main.go:141] libmachine: (ha-189125) Calling .GetMachineName
	I0818 19:06:06.013376   31924 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:06:06.015862   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.016212   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:06.016239   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.016518   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:06.018792   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.019187   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:06.019209   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.019367   31924 provision.go:143] copyHostCerts
	I0818 19:06:06.019420   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:06:06.019468   31924 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 19:06:06.019484   31924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:06:06.019550   31924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 19:06:06.019619   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:06:06.019637   31924 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 19:06:06.019643   31924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:06:06.019666   31924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 19:06:06.019705   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:06:06.019721   31924 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 19:06:06.019727   31924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:06:06.019753   31924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 19:06:06.019795   31924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.ha-189125 san=[127.0.0.1 192.168.39.49 ha-189125 localhost minikube]
	I0818 19:06:06.169846   31924 provision.go:177] copyRemoteCerts
	I0818 19:06:06.169898   31924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:06:06.169920   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:06.172607   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.172994   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:06.173021   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.173168   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:06.173367   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:06.173535   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:06.173677   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:06:06.255494   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 19:06:06.255589   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0818 19:06:06.285898   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 19:06:06.285983   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 19:06:06.315455   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 19:06:06.315537   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:06:06.344895   31924 provision.go:87] duration metric: took 331.817623ms to configureAuth
	I0818 19:06:06.344925   31924 buildroot.go:189] setting minikube options for container-runtime
	I0818 19:06:06.345149   31924 config.go:182] Loaded profile config "ha-189125": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:06:06.345233   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:06:06.348058   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.348468   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:06:06.348499   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:06:06.348711   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:06:06.348917   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:06.349070   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:06:06.349322   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:06:06.349540   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:06:06.349706   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:06:06.349723   31924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 19:07:37.264588   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 19:07:37.264616   31924 machine.go:96] duration metric: took 1m31.610346753s to provisionDockerMachine
	I0818 19:07:37.264628   31924 start.go:293] postStartSetup for "ha-189125" (driver="kvm2")
	I0818 19:07:37.264639   31924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:07:37.264653   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.264954   31924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:07:37.264975   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.268186   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.268633   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.268651   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.268804   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.268979   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.269197   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.269352   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:07:37.351852   31924 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:07:37.355885   31924 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 19:07:37.355918   31924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 19:07:37.355987   31924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 19:07:37.356073   31924 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 19:07:37.356097   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 19:07:37.356227   31924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:07:37.366106   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:07:37.389557   31924 start.go:296] duration metric: took 124.916601ms for postStartSetup
	I0818 19:07:37.389602   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.389929   31924 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0818 19:07:37.389951   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.392622   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.392982   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.393009   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.393167   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.393351   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.393524   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.393655   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	W0818 19:07:37.478147   31924 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0818 19:07:37.478173   31924 fix.go:56] duration metric: took 1m31.84646119s for fixHost
	I0818 19:07:37.478202   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.480550   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.480917   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.480947   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.481081   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.481302   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.481425   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.481572   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.481736   31924 main.go:141] libmachine: Using SSH client type: native
	I0818 19:07:37.481941   31924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.49 22 <nil> <nil>}
	I0818 19:07:37.481953   31924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 19:07:37.580007   31924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724008057.547613306
	
	I0818 19:07:37.580031   31924 fix.go:216] guest clock: 1724008057.547613306
	I0818 19:07:37.580040   31924 fix.go:229] Guest: 2024-08-18 19:07:37.547613306 +0000 UTC Remote: 2024-08-18 19:07:37.478186899 +0000 UTC m=+91.975933127 (delta=69.426407ms)
	I0818 19:07:37.580091   31924 fix.go:200] guest clock delta is within tolerance: 69.426407ms
	I0818 19:07:37.580101   31924 start.go:83] releasing machines lock for "ha-189125", held for 1m31.948402543s
	I0818 19:07:37.580140   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.580359   31924 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:07:37.582711   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.583012   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.583037   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.583148   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.583608   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.583763   31924 main.go:141] libmachine: (ha-189125) Calling .DriverName
	I0818 19:07:37.583872   31924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:07:37.583920   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.583948   31924 ssh_runner.go:195] Run: cat /version.json
	I0818 19:07:37.583970   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHHostname
	I0818 19:07:37.586059   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.586340   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.586374   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.586395   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.586487   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.586650   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.586710   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:37.586734   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:37.586784   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.586908   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHPort
	I0818 19:07:37.586940   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:07:37.587064   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHKeyPath
	I0818 19:07:37.587190   31924 main.go:141] libmachine: (ha-189125) Calling .GetSSHUsername
	I0818 19:07:37.587323   31924 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/ha-189125/id_rsa Username:docker}
	I0818 19:07:37.660859   31924 ssh_runner.go:195] Run: systemctl --version
	I0818 19:07:37.689591   31924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 19:07:37.849280   31924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 19:07:37.858858   31924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 19:07:37.858931   31924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:07:37.868534   31924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0818 19:07:37.868558   31924 start.go:495] detecting cgroup driver to use...
	I0818 19:07:37.868626   31924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 19:07:37.884705   31924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 19:07:37.898789   31924 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:07:37.898832   31924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:07:37.911736   31924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:07:37.925097   31924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:07:38.071610   31924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:07:38.215462   31924 docker.go:233] disabling docker service ...
	I0818 19:07:38.215526   31924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:07:38.233043   31924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:07:38.246968   31924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 19:07:38.390025   31924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 19:07:38.535880   31924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:07:38.550973   31924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:07:38.569703   31924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 19:07:38.569756   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.581001   31924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 19:07:38.581055   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.591705   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.602543   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.612836   31924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:07:38.623510   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.634002   31924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.645689   31924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:07:38.656232   31924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:07:38.665965   31924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:07:38.675565   31924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:07:38.822305   31924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 19:07:39.250990   31924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 19:07:39.251050   31924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 19:07:39.256028   31924 start.go:563] Will wait 60s for crictl version
	I0818 19:07:39.256087   31924 ssh_runner.go:195] Run: which crictl
	I0818 19:07:39.260146   31924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:07:39.297461   31924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 19:07:39.297549   31924 ssh_runner.go:195] Run: crio --version
	I0818 19:07:39.326109   31924 ssh_runner.go:195] Run: crio --version
	I0818 19:07:39.357322   31924 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 19:07:39.358898   31924 main.go:141] libmachine: (ha-189125) Calling .GetIP
	I0818 19:07:39.361418   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:39.361803   31924 main.go:141] libmachine: (ha-189125) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e9:51:81", ip: ""} in network mk-ha-189125: {Iface:virbr1 ExpiryTime:2024-08-18 19:55:31 +0000 UTC Type:0 Mac:52:54:00:e9:51:81 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:ha-189125 Clientid:01:52:54:00:e9:51:81}
	I0818 19:07:39.361828   31924 main.go:141] libmachine: (ha-189125) DBG | domain ha-189125 has defined IP address 192.168.39.49 and MAC address 52:54:00:e9:51:81 in network mk-ha-189125
	I0818 19:07:39.362028   31924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 19:07:39.366700   31924 kubeadm.go:883] updating cluster {Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:07:39.366833   31924 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:07:39.366887   31924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:07:39.419370   31924 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:07:39.419413   31924 crio.go:433] Images already preloaded, skipping extraction
	I0818 19:07:39.419476   31924 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:07:39.452680   31924 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:07:39.452699   31924 cache_images.go:84] Images are preloaded, skipping loading
	I0818 19:07:39.452707   31924 kubeadm.go:934] updating node { 192.168.39.49 8443 v1.31.0 crio true true} ...
	I0818 19:07:39.452822   31924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-189125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.49
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 19:07:39.452908   31924 ssh_runner.go:195] Run: crio config
	I0818 19:07:39.499803   31924 cni.go:84] Creating CNI manager for ""
	I0818 19:07:39.499824   31924 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0818 19:07:39.499836   31924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:07:39.499869   31924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.49 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-189125 NodeName:ha-189125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.49"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.49 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 19:07:39.500009   31924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.49
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-189125"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.49
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.49"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 19:07:39.500034   31924 kube-vip.go:115] generating kube-vip config ...
	I0818 19:07:39.500081   31924 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0818 19:07:39.511639   31924 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0818 19:07:39.511782   31924 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0818 19:07:39.511847   31924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 19:07:39.521055   31924 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:07:39.521130   31924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0818 19:07:39.530402   31924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0818 19:07:39.546888   31924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:07:39.562803   31924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0818 19:07:39.578840   31924 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0818 19:07:39.596494   31924 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0818 19:07:39.600466   31924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:07:39.739066   31924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:07:39.753759   31924 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125 for IP: 192.168.39.49
	I0818 19:07:39.753780   31924 certs.go:194] generating shared ca certs ...
	I0818 19:07:39.753794   31924 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:07:39.753924   31924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 19:07:39.753960   31924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 19:07:39.753971   31924 certs.go:256] generating profile certs ...
	I0818 19:07:39.754042   31924 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/client.key
	I0818 19:07:39.754066   31924 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ef4a23ea
	I0818 19:07:39.754092   31924 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ef4a23ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.49 192.168.39.147 192.168.39.170 192.168.39.254]
	I0818 19:07:39.933649   31924 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ef4a23ea ...
	I0818 19:07:39.933679   31924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ef4a23ea: {Name:mkdc56597df2587c95958d3a0975f94a91bdd52d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:07:39.933872   31924 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ef4a23ea ...
	I0818 19:07:39.933889   31924 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ef4a23ea: {Name:mkbc917195b7b61cd9ba2cfbe30abf338bd83958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:07:39.933991   31924 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt.ef4a23ea -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt
	I0818 19:07:39.934195   31924 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key.ef4a23ea -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key
	I0818 19:07:39.934369   31924 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key
	I0818 19:07:39.934542   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 19:07:39.934608   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 19:07:39.934630   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 19:07:39.934651   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 19:07:39.934672   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 19:07:39.934691   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 19:07:39.934755   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 19:07:39.934775   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 19:07:39.934857   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 19:07:39.934907   31924 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 19:07:39.934921   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:07:39.934967   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:07:39.935020   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:07:39.935052   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 19:07:39.935114   31924 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:07:39.935157   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:07:39.935181   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 19:07:39.935199   31924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 19:07:39.935843   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:07:39.960784   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:07:39.984252   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:07:40.035029   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 19:07:40.113353   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 19:07:40.150748   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 19:07:40.197845   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:07:40.223656   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/ha-189125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 19:07:40.255793   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 19:07:40.292028   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 19:07:40.330529   31924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 19:07:40.364569   31924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:07:40.392780   31924 ssh_runner.go:195] Run: openssl version
	I0818 19:07:40.399027   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:07:40.417718   31924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:07:40.422630   31924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:07:40.422669   31924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:07:40.428925   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 19:07:40.437895   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 19:07:40.448159   31924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 19:07:40.452514   31924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:07:40.452555   31924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 19:07:40.457996   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 19:07:40.466836   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 19:07:40.477133   31924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 19:07:40.481994   31924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:07:40.482035   31924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 19:07:40.487558   31924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 19:07:40.496568   31924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:07:40.501015   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 19:07:40.506578   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 19:07:40.512267   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 19:07:40.517630   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 19:07:40.523626   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 19:07:40.528832   31924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
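
Note on the openssl sequence above: the `ln -fs ... /etc/ssl/certs/<hash>.0` steps follow the standard OpenSSL hashed-directory convention (the link name is the certificate's subject hash plus a ".0" suffix), and `-checkend 86400` asks openssl to fail if the certificate expires within the next 24 hours. A minimal sketch of the same two checks, using one of the paths from the log (purely illustrative, not part of the test run):

    # Link a CA cert under its subject hash so OpenSSL can find it in /etc/ssl/certs.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

    # Exit non-zero if the apiserver cert expires within 86400 seconds (24 hours).
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400
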
	I0818 19:07:40.534232   31924 kubeadm.go:392] StartCluster: {Name:ha-189125 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-189125 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.252 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:07:40.534391   31924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 19:07:40.534450   31924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:07:40.570633   31924 cri.go:89] found id: "59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70"
	I0818 19:07:40.570652   31924 cri.go:89] found id: "fb12fc4ec25af08df46345660895d418672cbe1d7db1e593dcd597d263ca8c49"
	I0818 19:07:40.570656   31924 cri.go:89] found id: "7544f3da7d94ec0d1b3fea670c2fdf21384f845a3aba7f87c8df32f49453d08c"
	I0818 19:07:40.570659   31924 cri.go:89] found id: "4b5799e37ce1ed7e355adf8ea7403c2a7f3d4f154a5276d6c9b71220c63e2e61"
	I0818 19:07:40.570662   31924 cri.go:89] found id: "fc3542516c1910d46d9dae2b65572cb275ab4eb3f0640acf0110d44193161c4f"
	I0818 19:07:40.570665   31924 cri.go:89] found id: "181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2"
	I0818 19:07:40.570667   31924 cri.go:89] found id: "f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107"
	I0818 19:07:40.570669   31924 cri.go:89] found id: "197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6"
	I0818 19:07:40.570672   31924 cri.go:89] found id: "d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d"
	I0818 19:07:40.570677   31924 cri.go:89] found id: "f9e43e0af59e65c83cdc09956819ef6523d8d3913d2e585fa3fc1766cce8f7d9"
	I0818 19:07:40.570679   31924 cri.go:89] found id: "79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125"
	I0818 19:07:40.570682   31924 cri.go:89] found id: "972d7a97ac9ef59ff56acb3dd590bba677332247d9bac5f599e58c1a121370c0"
	I0818 19:07:40.570685   31924 cri.go:89] found id: "8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058"
	I0818 19:07:40.570688   31924 cri.go:89] found id: "2d4a0eeafb63103a4880977a90a6daa24bd77f03a6fe3107d06cccb629e9b036"
	I0818 19:07:40.570692   31924 cri.go:89] found id: ""
	I0818 19:07:40.570730   31924 ssh_runner.go:195] Run: sudo runc list -f json
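
The container IDs listed above were gathered by filtering CRI-O on the pod-namespace label, then cross-checked against the low-level runtime. A minimal sketch of the same two queries as run in the log (flags and label key taken from the log itself):

    # List IDs of all kube-system containers (running and exited) known to CRI-O.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # Compare with the containers runc itself is tracking, in JSON form.
    sudo runc list -f json
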
	
	
	==> CRI-O <==
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.086354039Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008404086321279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66b73714-56c9-4b03-bdfe-7bc229a8683a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.090408560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12130177-c714-4976-a2f0-d0e813ea0c12 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.090479485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12130177-c714-4976-a2f0-d0e813ea0c12 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.090823962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355ce87e30385ee6ebe57147018b9dcef3cebb9c14cf6115f6c958193bb4673e,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724008151505840224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724008112502594529,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724008105503005864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d921b23237765708d696d48a2b78d70e5573a279dd5aa8c1c21b3e87f144be,PodSandboxId:c624d083cfe8b1918684e13f427d19717512f77a8b8dd1cbc946119d91dcc4ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724008098865959919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc77b3d0c32bba19cf0f4552096d3a4bd50cca218c4ad3c468ff779fcbdaea05,PodSandboxId:2e17b9ce366791ccbe7be90fac891b3ee72587a6258545530b732532c2cb3a60,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724008079526716639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd66e844c8c1cf0bca8571443427a34e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320,PodSandboxId:bc82656318d897233bf510f47fbafdcf69a77c16ce67a451201b1a6f5f105c89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724008065531808934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d6e5e1d924faa0cbaf2a98331cba44192754b1cda5c19934fea840d5b640d326,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724008065666525967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c,PodSandboxId:bc6acbfa3e23e5510924996ef360ef90c6fe6ebfd49e61957a1c21779571feee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724008065502418956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0201fb0f
3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3,PodSandboxId:a20694505a6e78eb5c262d833a9d46d3cda4ac689f3f9adde8785e843e4e5df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724008065385558143,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6eabe6cb2b27456043492a7a17f7a
a98a7ace76e092cb01972aebd6beca960f,PodSandboxId:c99f76b154ff2d0efeb49b2d69bc06c09e5780eec60473697885123776031967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008065434817846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724008065276873163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724008065267031643,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3,PodSandboxId:631f0bdb59f802be42fbd3ac58ebcb78e5061a0cabcf4e75d0f1b0107762443d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724008065217000978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70,PodSandboxId:ac3b59788a4b90c3842cd67f36786f6348d794e4843874b697034f2559e98b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008060250386766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724007567164702589,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379297354682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379300776150,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724007367338619690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724007363376537147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724007351593272153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724007351506952426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12130177-c714-4976-a2f0-d0e813ea0c12 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.131378828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e345d513-64bd-4cd7-99bb-173240deb026 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.131453651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e345d513-64bd-4cd7-99bb-173240deb026 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.132920598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aefe287d-475b-40f5-a691-c79cd802db86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.133560932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008404133537586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aefe287d-475b-40f5-a691-c79cd802db86 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.134322464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8abd1974-b0db-450f-84bf-f16efa1338ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.134376136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8abd1974-b0db-450f-84bf-f16efa1338ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.135040563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355ce87e30385ee6ebe57147018b9dcef3cebb9c14cf6115f6c958193bb4673e,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724008151505840224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724008112502594529,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724008105503005864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d921b23237765708d696d48a2b78d70e5573a279dd5aa8c1c21b3e87f144be,PodSandboxId:c624d083cfe8b1918684e13f427d19717512f77a8b8dd1cbc946119d91dcc4ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724008098865959919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc77b3d0c32bba19cf0f4552096d3a4bd50cca218c4ad3c468ff779fcbdaea05,PodSandboxId:2e17b9ce366791ccbe7be90fac891b3ee72587a6258545530b732532c2cb3a60,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724008079526716639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd66e844c8c1cf0bca8571443427a34e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320,PodSandboxId:bc82656318d897233bf510f47fbafdcf69a77c16ce67a451201b1a6f5f105c89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724008065531808934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d6e5e1d924faa0cbaf2a98331cba44192754b1cda5c19934fea840d5b640d326,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724008065666525967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c,PodSandboxId:bc6acbfa3e23e5510924996ef360ef90c6fe6ebfd49e61957a1c21779571feee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724008065502418956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0201fb0f
3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3,PodSandboxId:a20694505a6e78eb5c262d833a9d46d3cda4ac689f3f9adde8785e843e4e5df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724008065385558143,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6eabe6cb2b27456043492a7a17f7a
a98a7ace76e092cb01972aebd6beca960f,PodSandboxId:c99f76b154ff2d0efeb49b2d69bc06c09e5780eec60473697885123776031967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008065434817846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724008065276873163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724008065267031643,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3,PodSandboxId:631f0bdb59f802be42fbd3ac58ebcb78e5061a0cabcf4e75d0f1b0107762443d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724008065217000978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70,PodSandboxId:ac3b59788a4b90c3842cd67f36786f6348d794e4843874b697034f2559e98b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008060250386766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724007567164702589,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379297354682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379300776150,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724007367338619690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724007363376537147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724007351593272153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724007351506952426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8abd1974-b0db-450f-84bf-f16efa1338ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.184049501Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ce098d8-5ab6-4b96-b1b0-5b8ec70dfd82 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.184202588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ce098d8-5ab6-4b96-b1b0-5b8ec70dfd82 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.185189665Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1af81935-5c00-4ea2-982e-c6a729df7f60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.185610852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008404185588264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1af81935-5c00-4ea2-982e-c6a729df7f60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.186201921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55d697eb-b758-4646-a6bb-17424becb8bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.186328541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55d697eb-b758-4646-a6bb-17424becb8bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.186735089Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355ce87e30385ee6ebe57147018b9dcef3cebb9c14cf6115f6c958193bb4673e,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724008151505840224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724008112502594529,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724008105503005864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d921b23237765708d696d48a2b78d70e5573a279dd5aa8c1c21b3e87f144be,PodSandboxId:c624d083cfe8b1918684e13f427d19717512f77a8b8dd1cbc946119d91dcc4ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724008098865959919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc77b3d0c32bba19cf0f4552096d3a4bd50cca218c4ad3c468ff779fcbdaea05,PodSandboxId:2e17b9ce366791ccbe7be90fac891b3ee72587a6258545530b732532c2cb3a60,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724008079526716639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd66e844c8c1cf0bca8571443427a34e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320,PodSandboxId:bc82656318d897233bf510f47fbafdcf69a77c16ce67a451201b1a6f5f105c89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724008065531808934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d6e5e1d924faa0cbaf2a98331cba44192754b1cda5c19934fea840d5b640d326,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724008065666525967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c,PodSandboxId:bc6acbfa3e23e5510924996ef360ef90c6fe6ebfd49e61957a1c21779571feee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724008065502418956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0201fb0f
3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3,PodSandboxId:a20694505a6e78eb5c262d833a9d46d3cda4ac689f3f9adde8785e843e4e5df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724008065385558143,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6eabe6cb2b27456043492a7a17f7a
a98a7ace76e092cb01972aebd6beca960f,PodSandboxId:c99f76b154ff2d0efeb49b2d69bc06c09e5780eec60473697885123776031967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008065434817846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724008065276873163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724008065267031643,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3,PodSandboxId:631f0bdb59f802be42fbd3ac58ebcb78e5061a0cabcf4e75d0f1b0107762443d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724008065217000978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70,PodSandboxId:ac3b59788a4b90c3842cd67f36786f6348d794e4843874b697034f2559e98b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008060250386766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724007567164702589,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379297354682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379300776150,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724007367338619690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724007363376537147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724007351593272153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724007351506952426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55d697eb-b758-4646-a6bb-17424becb8bb name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.234801635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abcf5fc3-c15e-41d1-bde3-b54400c93e43 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.234901331Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abcf5fc3-c15e-41d1-bde3-b54400c93e43 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.235939848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d90a669-cc0b-4139-8327-a7a539e1e631 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.236461555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008404236428156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d90a669-cc0b-4139-8327-a7a539e1e631 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.236955928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=966cd019-3f7e-4473-b97a-e94fe3f7594f name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.237008346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=966cd019-3f7e-4473-b97a-e94fe3f7594f name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:13:24 ha-189125 crio[3711]: time="2024-08-18 19:13:24.237522399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:355ce87e30385ee6ebe57147018b9dcef3cebb9c14cf6115f6c958193bb4673e,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724008151505840224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724008112502594529,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724008105503005864,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30d921b23237765708d696d48a2b78d70e5573a279dd5aa8c1c21b3e87f144be,PodSandboxId:c624d083cfe8b1918684e13f427d19717512f77a8b8dd1cbc946119d91dcc4ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724008098865959919,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc77b3d0c32bba19cf0f4552096d3a4bd50cca218c4ad3c468ff779fcbdaea05,PodSandboxId:2e17b9ce366791ccbe7be90fac891b3ee72587a6258545530b732532c2cb3a60,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724008079526716639,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd66e844c8c1cf0bca8571443427a34e,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320,PodSandboxId:bc82656318d897233bf510f47fbafdcf69a77c16ce67a451201b1a6f5f105c89,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724008065531808934,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:d6e5e1d924faa0cbaf2a98331cba44192754b1cda5c19934fea840d5b640d326,PodSandboxId:24016516664de6e9c004d1f50fb917fe607c1c1bc7a95d1543ae3c068398dc97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724008065666525967,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35b948dd-9b74-4f76-9cdb-82e0901fc421,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c,PodSandboxId:bc6acbfa3e23e5510924996ef360ef90c6fe6ebfd49e61957a1c21779571feee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724008065502418956,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0201fb0f
3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3,PodSandboxId:a20694505a6e78eb5c262d833a9d46d3cda4ac689f3f9adde8785e843e4e5df6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724008065385558143,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6eabe6cb2b27456043492a7a17f7a
a98a7ace76e092cb01972aebd6beca960f,PodSandboxId:c99f76b154ff2d0efeb49b2d69bc06c09e5780eec60473697885123776031967,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008065434817846,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0,PodSandboxId:0a7968b9836c3154db550e96c196eccb73fa0793d8ee5c0cfb558fedc586576b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724008065276873163,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd8fdf8c45fd27ad0a1a2caca7c2a9ba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c,PodSandboxId:5a09bb72754f24cc77734d5c4a91bb0ff4064e6cd57d03c86dd31e3b21c958fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724008065267031643,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40d0dc4374e1459bcceafb607ec16a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMess
agePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3,PodSandboxId:631f0bdb59f802be42fbd3ac58ebcb78e5061a0cabcf4e75d0f1b0107762443d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724008065217000978,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70,PodSandboxId:ac3b59788a4b90c3842cd67f36786f6348d794e4843874b697034f2559e98b41,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724008060250386766,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"
name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbf1a420990c95e7188a8a263cde723b15fa1aef63fb54207084c37e99c4721,PodSandboxId:8cdf7a8433c4d7513b6e132057eb47ede199ac02fe1c0c2312bb1225410797c9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724007567164702589,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-kxdwj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2ebdc21-75ca-43ac-86f2-7c492eefe97d,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107,PodSandboxId:0e090955bb301f6e1b92d757986b5520310c5caf961c1cb9f4b875429c496c3b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379297354682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7xr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d4354313-0e2d-4d96-9cd1-a8f69a4aee26,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2,PodSandboxId:c4e0fe307dc9771c68f88d1cade54a12a87ab016c826d07cc9bdcc4c4c8e5919,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724007379300776150,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-q9j97,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1c0597-6624-4a3e-8356-7d23555c2809,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6,PodSandboxId:c93b973b05129eed0a02f6d0648ab7dd06db1c555cfab81343ffc7c4ce308ebd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724007367338619690,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-jwxjh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 086477c9-e6eb-403e-adc7-b15347918484,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d,PodSandboxId:c28cd1212a8c0c4ab0d4479c389c65a5ba385698c40ec83c9ff339c26a97ddcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724007363376537147,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96xwx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3f6dfae-e097-4889-933b-433f1b6b78fe,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125,PodSandboxId:b20bbedf6c01193ec95095059412bc7bfa6efc04d65e9ec34e0b9b85681e45ba,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915a
f3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724007351593272153,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 364cc1fdd234c99256cc8ba25ced6909,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058,PodSandboxId:6fe0bbacb48d2c2e3fc5d4adccb496f5bf5b5501e0873495a2d57c9658886385,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
,State:CONTAINER_EXITED,CreatedAt:1724007351506952426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-189125,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3308648844d3f83b8ab068e71d70c9d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=966cd019-3f7e-4473-b97a-e94fe3f7594f name=/runtime.v1.RuntimeService/ListContainers
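The crio entries above are the raw gRPC exchanges behind the container listing: a Version probe, an ImageFsInfo query, and an unfiltered /runtime.v1.RuntimeService/ListContainers call whose response is the long protobuf dump. As a point of reference only, the sketch below issues the same ListContainers call directly; it assumes the CRI-O socket sits at its default path /var/run/crio/crio.sock and that the google.golang.org/grpc and k8s.io/cri-api modules are available — neither is stated in this report.

// listcontainers.go — a minimal sketch (not part of the test suite) that issues the
// same /runtime.v1.RuntimeService/ListContainers call recorded in the debug log above.
// Assumption: CRI-O is listening on its default unix socket /var/run/crio/crio.sock.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The CRI is plain gRPC over a unix socket, so no transport security is used.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns every container, matching the
	// "No filters were applied, returning full container list" entries above.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-13s %-25s attempt=%d %s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

Run on the node (for example via minikube ssh), this would print roughly the same rows as the container status table that follows.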
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	355ce87e30385       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       5                   24016516664de       storage-provisioner
	b4bdc19004cc6       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   0a7968b9836c3       kube-apiserver-ha-189125
	391bbfbf5a674       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   5a09bb72754f2       kube-controller-manager-ha-189125
	30d921b232377       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   c624d083cfe8b       busybox-7dff88458-kxdwj
	fc77b3d0c32bb       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   2e17b9ce36679       kube-vip-ha-189125
	d6e5e1d924faa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   24016516664de       storage-provisioner
	2e5bf40065d4e       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   bc82656318d89       kube-proxy-96xwx
	d282d86ace46d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   bc6acbfa3e23e       kindnet-jwxjh
	d6eabe6cb2b27       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   c99f76b154ff2       coredns-6f6b679f8f-q9j97
	e0201fb0f3e91       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   a20694505a6e7       kube-scheduler-ha-189125
	a9d10442a1782       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   0a7968b9836c3       kube-apiserver-ha-189125
	ce8ceea70f09e       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   5a09bb72754f2       kube-controller-manager-ha-189125
	504747e4441ad       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   631f0bdb59f80       etcd-ha-189125
	59c8ad3c2bd21       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   ac3b59788a4b9       coredns-6f6b679f8f-7xr26
	1cbf1a420990c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   8cdf7a8433c4d       busybox-7dff88458-kxdwj
	181bcd36f89b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   c4e0fe307dc97       coredns-6f6b679f8f-q9j97
	f095c1d3ba818       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      17 minutes ago      Exited              coredns                   0                   0e090955bb301       coredns-6f6b679f8f-7xr26
	197dd2bffa6c8       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    17 minutes ago      Exited              kindnet-cni               0                   c93b973b05129       kindnet-jwxjh
	d3f078fad6871       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      17 minutes ago      Exited              kube-proxy                0                   c28cd1212a8c0       kube-proxy-96xwx
	79fc87641651d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      17 minutes ago      Exited              etcd                      0                   b20bbedf6c011       etcd-ha-189125
	8eb7a6513c9b9       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      17 minutes ago      Exited              kube-scheduler            0                   6fe0bbacb48d2       kube-scheduler-ha-189125
	
	
	==> coredns [181bcd36f89b86e660da339f796b6cd9b3481916035a524978f64f62de3a9ce2] <==
	[INFO] 10.244.2.2:56571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00135054s
	[INFO] 10.244.2.2:43437 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086979s
	[INFO] 10.244.0.4:53861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002025942s
	[INFO] 10.244.0.4:36847 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001326246s
	[INFO] 10.244.0.4:36223 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000073856s
	[INFO] 10.244.0.4:53397 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051079s
	[INFO] 10.244.0.4:60257 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077527s
	[INFO] 10.244.1.2:36105 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142033s
	[INFO] 10.244.2.2:43159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120043s
	[INFO] 10.244.2.2:48451 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105513s
	[INFO] 10.244.2.2:40617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090209s
	[INFO] 10.244.2.2:53467 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079345s
	[INFO] 10.244.0.4:34375 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009177s
	[INFO] 10.244.0.4:47256 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098542s
	[INFO] 10.244.0.4:38739 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087517s
	[INFO] 10.244.1.2:44329 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157424s
	[INFO] 10.244.1.2:52970 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000328904s
	[INFO] 10.244.2.2:35139 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00010364s
	[INFO] 10.244.2.2:51553 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000143049s
	[INFO] 10.244.0.4:55737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097209s
	[INFO] 10.244.0.4:56754 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000040314s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1952&timeout=6m56s&timeoutSeconds=416&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1950&timeout=7m25s&timeoutSeconds=445&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
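The [INFO] query lines above follow the coredns log plugin's default layout: client address and port, query id, a quoted "type class name proto size do bufsize" tuple, then the response code, flags, response size and duration. The trailing [ERROR] watch failures show the pod losing its route to the apiserver service IP 10.96.0.1 before it is terminated. Below is a minimal, hypothetical parser for those query lines (not part of any minikube tooling), assuming the default log format shown in this output.

// parsequery.go — hypothetical helper that splits a coredns query-log line such as
//   [INFO] 10.244.2.2:56571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00135054s
// into its fields. Field order follows the coredns "log" plugin's default format.
package main

import (
	"fmt"
	"regexp"
)

// client:port - id "type class name proto size do bufsize" rcode flags rsize duration
var queryLine = regexp.MustCompile(
	`^\[INFO\] (\S+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

func main() {
	line := `[INFO] 10.244.2.2:56571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00135054s`
	m := queryLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a query-log line")
		return
	}
	// m[1]=client, m[3]=query type, m[5]=name, m[10]=rcode, m[13]=duration
	fmt.Printf("client=%s type=%s name=%s rcode=%s duration=%s\n",
		m[1], m[3], m[5], m[10], m[13])
}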
	
	
	==> coredns [59c8ad3c2bd215874ab8bf8bfdd9feb3b19dcd6319385e6f5f6d03c632a16a70] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[127350764]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:07:48.738) (total time: 10001ms):
	Trace[127350764]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:07:58.739)
	Trace[127350764]: [10.001416775s] [10.001416775s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37038->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:37038->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50170->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:50170->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d6eabe6cb2b27456043492a7a17f7aa98a7ace76e092cb01972aebd6beca960f] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:38038->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:38038->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38014->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[964636579]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (18-Aug-2024 19:07:57.255) (total time: 10792ms):
	Trace[964636579]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38014->10.96.0.1:443: read: connection reset by peer 10792ms (19:08:08.048)
	Trace[964636579]: [10.792609462s] [10.792609462s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38014->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f095c1d3ba8180f100932a101ab419e1ffe8f20ce6f02a8eb04d3b83249f6107] <==
	[INFO] 10.244.0.4:50813 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001489232s
	[INFO] 10.244.1.2:44640 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003618953s
	[INFO] 10.244.1.2:37984 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161286s
	[INFO] 10.244.2.2:55904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150006s
	[INFO] 10.244.2.2:38276 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00189507s
	[INFO] 10.244.2.2:42054 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179179s
	[INFO] 10.244.2.2:35911 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190164s
	[INFO] 10.244.2.2:52357 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000163157s
	[INFO] 10.244.0.4:38374 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136266s
	[INFO] 10.244.0.4:33983 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103666s
	[INFO] 10.244.0.4:42233 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000069982s
	[INFO] 10.244.1.2:39502 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134749s
	[INFO] 10.244.1.2:38715 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102799s
	[INFO] 10.244.1.2:55122 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135608s
	[INFO] 10.244.0.4:56934 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000488s
	[INFO] 10.244.1.2:45200 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251667s
	[INFO] 10.244.1.2:35239 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131205s
	[INFO] 10.244.2.2:47108 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152092s
	[INFO] 10.244.2.2:45498 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093397s
	[INFO] 10.244.0.4:52889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059058s
	[INFO] 10.244.0.4:55998 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000042989s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1952&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1952&timeout=5m40s&timeoutSeconds=340&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-189125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T18_55_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:55:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:13:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:08:31 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:08:31 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:08:31 +0000   Sun, 18 Aug 2024 18:55:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:08:31 +0000   Sun, 18 Aug 2024 18:56:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.49
	  Hostname:    ha-189125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9520f8bfe7ab47fca640aa213dbc51c5
	  System UUID:                9520f8bf-e7ab-47fc-a640-aa213dbc51c5
	  Boot ID:                    d5000132-c81a-4416-b5cd-bc4cc58a7c4a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kxdwj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-6f6b679f8f-7xr26             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-6f6b679f8f-q9j97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-189125                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-jwxjh                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-189125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-189125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-96xwx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-189125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-189125                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m55s                  kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-189125 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-189125 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-189125 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                    node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal   NodeReady                17m                    kubelet          Node ha-189125 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Warning  ContainerGCFailed        6m27s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m43s (x3 over 6m32s)  kubelet          Node ha-189125 status is now: NodeNotReady
	  Normal   RegisteredNode           5m1s                   node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	  Normal   RegisteredNode           3m21s                  node-controller  Node ha-189125 event: Registered Node ha-189125 in Controller
	
	
	Name:               ha-189125-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T18_57_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:57:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:13:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:09:10 +0000   Sun, 18 Aug 2024 19:08:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:09:10 +0000   Sun, 18 Aug 2024 19:08:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:09:10 +0000   Sun, 18 Aug 2024 19:08:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:09:10 +0000   Sun, 18 Aug 2024 19:08:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.147
	  Hostname:    ha-189125-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3324dc2b927f496881437c52ed831dff
	  System UUID:                3324dc2b-927f-4968-8143-7c52ed831dff
	  Boot ID:                    4823ca56-5a42-4c8c-8af0-f183e470fe0f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8bwfj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-189125-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-qhnpv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-189125-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-189125-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-scwlr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-189125-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-189125-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                    node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-189125-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-189125-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-189125-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-189125-m02 status is now: NodeNotReady
	  Normal  Starting                 5m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-189125-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m1s                   node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  RegisteredNode           4m47s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	  Normal  RegisteredNode           3m21s                  node-controller  Node ha-189125-m02 event: Registered Node ha-189125-m02 in Controller
	
	
	Name:               ha-189125-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-189125-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=ha-189125
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T19_00_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:00:00 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-189125-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:10:57 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 18 Aug 2024 19:10:37 +0000   Sun, 18 Aug 2024 19:11:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 18 Aug 2024 19:10:37 +0000   Sun, 18 Aug 2024 19:11:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 18 Aug 2024 19:10:37 +0000   Sun, 18 Aug 2024 19:11:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 18 Aug 2024 19:10:37 +0000   Sun, 18 Aug 2024 19:11:38 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    ha-189125-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aaeec6aea01d4746832fda2dc541437c
	  System UUID:                aaeec6ae-a01d-4746-832f-da2dc541437c
	  Boot ID:                    86399eab-ba5f-4a2b-9081-1c1e40769c26
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-2zjk8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-24hmx              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-krtg7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-189125-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-189125-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-189125-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-189125-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m1s                   node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   RegisteredNode           4m47s                  node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   RegisteredNode           3m21s                  node-controller  Node ha-189125-m04 event: Registered Node ha-189125-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-189125-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-189125-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-189125-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-189125-m04 has been rebooted, boot id: 86399eab-ba5f-4a2b-9081-1c1e40769c26
	  Normal   NodeReady                2m47s                  kubelet          Node ha-189125-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 4m21s)   node-controller  Node ha-189125-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.511172] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.059835] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053311] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195743] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.133817] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.270401] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.027475] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +4.080385] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.059467] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.140089] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.075123] kauditd_printk_skb: 79 callbacks suppressed
	[Aug18 18:56] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.023234] kauditd_printk_skb: 36 callbacks suppressed
	[Aug18 18:57] kauditd_printk_skb: 26 callbacks suppressed
	[Aug18 19:04] kauditd_printk_skb: 1 callbacks suppressed
	[Aug18 19:07] systemd-fstab-generator[3630]: Ignoring "noauto" option for root device
	[  +0.146860] systemd-fstab-generator[3642]: Ignoring "noauto" option for root device
	[  +0.178026] systemd-fstab-generator[3656]: Ignoring "noauto" option for root device
	[  +0.138747] systemd-fstab-generator[3668]: Ignoring "noauto" option for root device
	[  +0.292822] systemd-fstab-generator[3697]: Ignoring "noauto" option for root device
	[  +0.912666] systemd-fstab-generator[3796]: Ignoring "noauto" option for root device
	[  +5.310048] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.263628] kauditd_printk_skb: 75 callbacks suppressed
	[Aug18 19:08] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [504747e4441adf67e2eacc5d7aba412da818e7a6836ec477bcb76ad48c25aae3] <==
	{"level":"info","ts":"2024-08-18T19:09:57.054620Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7f2a407b6bb4eb12","to":"5e74e6c9f0774ce1","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-18T19:09:57.054683Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:09:57.067248Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7f2a407b6bb4eb12","to":"5e74e6c9f0774ce1","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-18T19:09:57.067308Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:10:51.036072Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7f2a407b6bb4eb12 switched to configuration voters=(9163207290670869266 9416963694674588667)"}
	{"level":"info","ts":"2024-08-18T19:10:51.044938Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"28c39da372138ae1","local-member-id":"7f2a407b6bb4eb12","removed-remote-peer-id":"5e74e6c9f0774ce1","removed-remote-peer-urls":["https://192.168.39.170:2380"]}
	{"level":"info","ts":"2024-08-18T19:10:51.045058Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"warn","ts":"2024-08-18T19:10:51.045135Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"7f2a407b6bb4eb12","removed-member-id":"5e74e6c9f0774ce1"}
	{"level":"warn","ts":"2024-08-18T19:10:51.045206Z","caller":"rafthttp/peer.go:198","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-08-18T19:10:51.045267Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"7f2a407b6bb4eb12","removed-member-id":"5e74e6c9f0774ce1"}
	{"level":"warn","ts":"2024-08-18T19:10:51.045278Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-08-18T19:10:51.046207Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:10:51.046278Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"warn","ts":"2024-08-18T19:10:51.047212Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:10:51.047289Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:10:51.047412Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"warn","ts":"2024-08-18T19:10:51.047678Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1","error":"context canceled"}
	{"level":"warn","ts":"2024-08-18T19:10:51.047733Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"5e74e6c9f0774ce1","error":"failed to read 5e74e6c9f0774ce1 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-18T19:10:51.047817Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"warn","ts":"2024-08-18T19:10:51.047997Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1","error":"context canceled"}
	{"level":"info","ts":"2024-08-18T19:10:51.048046Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:10:51.048122Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:10:51.048171Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"7f2a407b6bb4eb12","removed-remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"warn","ts":"2024-08-18T19:10:51.067414Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"7f2a407b6bb4eb12","remote-peer-id-stream-handler":"7f2a407b6bb4eb12","remote-peer-id-from":"5e74e6c9f0774ce1"}
	{"level":"warn","ts":"2024-08-18T19:10:51.067779Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"7f2a407b6bb4eb12","remote-peer-id-stream-handler":"7f2a407b6bb4eb12","remote-peer-id-from":"5e74e6c9f0774ce1"}
	
	
	==> etcd [79fc87641651dabfc6bab9c837bf4d14bc29a201c8f4a4bbd485360f54e5c125] <==
	{"level":"warn","ts":"2024-08-18T19:06:06.482856Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:06:05.905375Z","time spent":"577.478607ms","remote":"127.0.0.1:57982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" limit:10000 "}
	2024/08/18 19:06:06 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-18T19:06:06.745825Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.49:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:06:06.745884Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.49:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:06:06.746053Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7f2a407b6bb4eb12","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-18T19:06:06.746358Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746406Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746457Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746577Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746639Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746688Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746716Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"82afc6964bd433fb"}
	{"level":"info","ts":"2024-08-18T19:06:06.746753Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.746780Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.746819Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.746910Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.746956Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.747002Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7f2a407b6bb4eb12","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.747029Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5e74e6c9f0774ce1"}
	{"level":"info","ts":"2024-08-18T19:06:06.750920Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.49:2380"}
	{"level":"warn","ts":"2024-08-18T19:06:06.751038Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.85943136s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-18T19:06:06.751181Z","caller":"traceutil/trace.go:171","msg":"trace[1867387594] range","detail":"{range_begin:; range_end:; }","duration":"8.859594342s","start":"2024-08-18T19:05:57.891576Z","end":"2024-08-18T19:06:06.751170Z","steps":["trace[1867387594] 'agreement among raft nodes before linearized reading'  (duration: 8.859427951s)"],"step_count":1}
	{"level":"error","ts":"2024-08-18T19:06:06.751301Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-18T19:06:06.751060Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.49:2380"}
	{"level":"info","ts":"2024-08-18T19:06:06.751464Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-189125","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.49:2380"],"advertise-client-urls":["https://192.168.39.49:2379"]}
	
	
	==> kernel <==
	 19:13:24 up 18 min,  0 users,  load average: 0.14, 0.38, 0.28
	Linux ha-189125 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [197dd2bffa6c8d9fcb1b2fdfb39a5da0cacbbd03abd31f76da871095c2ff67f6] <==
	I0818 19:05:28.445052       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:05:38.445280       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:05:38.445397       1 main.go:299] handling current node
	I0818 19:05:38.445431       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:05:38.445450       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:05:38.445656       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:05:38.445685       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:05:38.445753       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:05:38.445771       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:05:48.446996       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:05:48.447215       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:05:48.447437       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:05:48.447494       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:05:48.447630       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:05:48.447687       1 main.go:299] handling current node
	I0818 19:05:48.447726       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:05:48.447801       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:05:58.452044       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:05:58.452153       1 main.go:299] handling current node
	I0818 19:05:58.452174       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:05:58.452180       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:05:58.452360       1 main.go:295] Handling node with IPs: map[192.168.39.170:{}]
	I0818 19:05:58.452386       1 main.go:322] Node ha-189125-m03 has CIDR [10.244.2.0/24] 
	I0818 19:05:58.452466       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:05:58.452486       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d282d86ace46d3392f4ae6ef71bf3f81923f36f71929c6d68fc20a774adb2d9c] <==
	I0818 19:12:36.680878       1 main.go:299] handling current node
	I0818 19:12:46.674850       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:12:46.674989       1 main.go:299] handling current node
	I0818 19:12:46.675020       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:12:46.675042       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:12:46.675298       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:12:46.675336       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:12:56.683639       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:12:56.683700       1 main.go:299] handling current node
	I0818 19:12:56.683715       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:12:56.683721       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:12:56.683865       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:12:56.683872       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:13:06.682563       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:13:06.682696       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	I0818 19:13:06.682885       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:13:06.682912       1 main.go:299] handling current node
	I0818 19:13:06.682935       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:13:06.682952       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:13:16.676207       1 main.go:295] Handling node with IPs: map[192.168.39.49:{}]
	I0818 19:13:16.676332       1 main.go:299] handling current node
	I0818 19:13:16.676371       1 main.go:295] Handling node with IPs: map[192.168.39.147:{}]
	I0818 19:13:16.676390       1 main.go:322] Node ha-189125-m02 has CIDR [10.244.1.0/24] 
	I0818 19:13:16.676589       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0818 19:13:16.676640       1 main.go:322] Node ha-189125-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a9d10442a178245c874410928f5280c9631269c3d671d7ce51c0168aee4ee4f0] <==
	I0818 19:07:45.812312       1 options.go:228] external host was not specified, using 192.168.39.49
	I0818 19:07:45.825316       1 server.go:142] Version: v1.31.0
	I0818 19:07:45.825520       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:07:46.954894       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0818 19:07:46.965923       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:07:46.969890       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0818 19:07:46.969957       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0818 19:07:46.970219       1 instance.go:232] Using reconciler: lease
	W0818 19:08:06.952534       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0818 19:08:06.952539       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0818 19:08:06.971359       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [b4bdc19004cc633f4724b2894295035b21efe04634592ff121dc05fd973c211e] <==
	I0818 19:08:34.354802       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0818 19:08:34.353390       1 controller.go:142] Starting OpenAPI controller
	I0818 19:08:34.368666       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0818 19:08:34.368794       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:08:34.423464       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:08:34.423569       1 policy_source.go:224] refreshing policies
	I0818 19:08:34.441187       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:08:34.452332       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:08:34.452440       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0818 19:08:34.452474       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:08:34.453718       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 19:08:34.453809       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:08:34.453826       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:08:34.453830       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:08:34.453834       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:08:34.453981       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:08:34.454565       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0818 19:08:34.454859       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:08:34.454876       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 19:08:34.460453       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:08:34.508993       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 19:08:35.361915       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0818 19:08:35.672650       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.147 192.168.39.49]
	I0818 19:08:35.674224       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 19:08:35.680477       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [391bbfbf5a674236029dba37cd3dfa5d1bee92feccc3ebf25f649fc07d70e432] <==
	I0818 19:10:47.854120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.539µs"
	I0818 19:10:49.734673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="50.016µs"
	I0818 19:10:50.741973       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.417µs"
	I0818 19:10:50.753488       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.158µs"
	I0818 19:10:51.030898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.224828ms"
	I0818 19:10:51.031159       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="99.553µs"
	I0818 19:11:02.111900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-189125-m04"
	I0818 19:11:02.112222       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m03"
	E0818 19:11:02.174254       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-189125-m03\", UID:\"e1a46641-91b8-4229-8eba-eba2ed69c93b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-189125-m03\", UID:\"b0b415b4-cbda-446b-aaf6-fd9e29041be3\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-189125-m03\" not found" logger="UnhandledError"
	E0818 19:11:17.582197       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:17.582342       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:17.582372       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:17.582397       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:17.582421       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:37.583403       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:37.583463       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:37.583474       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:37.583482       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	E0818 19:11:37.583489       1 gc_controller.go:151] "Failed to get node" err="node \"ha-189125-m03\" not found" logger="pod-garbage-collector-controller" node="ha-189125-m03"
	I0818 19:11:38.339913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:11:38.363622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:11:38.426568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.864631ms"
	I0818 19:11:38.426711       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.608µs"
	I0818 19:11:42.666843       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	I0818 19:11:43.451937       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-189125-m04"
	
	
	==> kube-controller-manager [ce8ceea70f09e6517a446aaa327d05eab1c74d5724a3d782ffe17af224c52c6c] <==
	I0818 19:07:46.785450       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:07:47.186390       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:07:47.186491       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:07:47.188272       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:07:47.188922       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:07:47.189169       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:07:47.189277       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0818 19:08:07.976607       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.49:8443/healthz\": dial tcp 192.168.39.49:8443: connect: connection refused"
	
	
	==> kube-proxy [2e5bf40065d4ef2f75f45ece5e0a6f27ccb5375034fd24a9423bded4c3163320] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:07:50.191633       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0818 19:07:53.263872       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0818 19:07:56.336461       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0818 19:08:02.480780       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0818 19:08:11.695765       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-189125\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0818 19:08:28.673396       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.49"]
	E0818 19:08:28.673542       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:08:28.726596       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:08:28.726672       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:08:28.726713       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:08:28.729485       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:08:28.730045       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:08:28.730126       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:08:28.731941       1 config.go:197] "Starting service config controller"
	I0818 19:08:28.732001       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:08:28.732042       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:08:28.732063       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:08:28.733706       1 config.go:326] "Starting node config controller"
	I0818 19:08:28.733732       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:08:28.832679       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:08:28.832752       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:08:28.833821       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d3f078fad6871bfb3014e63c1e33e153150e715af71f8f2ace3d40434f7bb92d] <==
	E0818 19:04:48.751475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:48.751549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:48.751585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:48.751700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:48.751751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:55.087513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:55.087609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:55.087704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:55.087742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:04:58.159575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:04:58.159649       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:04.305152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:04.305338       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:07.377040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:07.377177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:10.448718       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:10.448835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:22.735833       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:22.736963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-189125&resourceVersion=1928\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:28.880688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:28.881025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1904\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:05:35.024375       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:05:35.024590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0818 19:06:05.743666       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854": dial tcp 192.168.39.254:8443: connect: no route to host
	E0818 19:06:05.743763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1854\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [8eb7a6513c9b9ebaccd24253275567a37ab89ede5c3c547a3fa061b4454a9058] <==
	E0818 18:58:54.898809       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24xql\": pod kindnet-24xql is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-24xql" node="ha-189125-m03"
	E0818 18:58:54.898876       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod ba1034b3-04c9-4c64-8fde-7b45ea42f21c(kube-system/kindnet-24xql) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24xql"
	E0818 18:58:54.898900       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24xql\": pod kindnet-24xql is already assigned to node \"ha-189125-m03\"" pod="kube-system/kindnet-24xql"
	I0818 18:58:54.898918       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24xql" node="ha-189125-m03"
	E0818 18:59:23.602753       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8bwfj\": pod busybox-7dff88458-8bwfj is already assigned to node \"ha-189125-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-8bwfj" node="ha-189125-m02"
	E0818 18:59:23.602879       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-8bwfj\": pod busybox-7dff88458-8bwfj is already assigned to node \"ha-189125-m02\"" pod="default/busybox-7dff88458-8bwfj"
	E0818 18:59:23.652419       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fvdcn\": pod busybox-7dff88458-fvdcn is already assigned to node \"ha-189125-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fvdcn" node="ha-189125-m03"
	E0818 18:59:23.652848       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 19fc5686-7021-4b6f-a097-71f7b6d6a76e(default/busybox-7dff88458-fvdcn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fvdcn"
	E0818 18:59:23.652953       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fvdcn\": pod busybox-7dff88458-fvdcn is already assigned to node \"ha-189125-m03\"" pod="default/busybox-7dff88458-fvdcn"
	I0818 18:59:23.653004       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fvdcn" node="ha-189125-m03"
	E0818 18:59:23.653552       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kxdwj\": pod busybox-7dff88458-kxdwj is already assigned to node \"ha-189125\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kxdwj" node="ha-189125"
	E0818 18:59:23.655579       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e2ebdc21-75ca-43ac-86f2-7c492eefe97d(default/busybox-7dff88458-kxdwj) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-kxdwj"
	E0818 18:59:23.655718       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kxdwj\": pod busybox-7dff88458-kxdwj is already assigned to node \"ha-189125\"" pod="default/busybox-7dff88458-kxdwj"
	I0818 18:59:23.655773       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-kxdwj" node="ha-189125"
	E0818 19:05:57.627368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0818 19:05:57.667171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0818 19:05:58.307794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0818 19:05:58.761376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0818 19:05:59.522211       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0818 19:05:59.626204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0818 19:06:00.175363       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0818 19:06:03.437897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0818 19:06:04.315214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0818 19:06:05.316192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0818 19:06:06.462220       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e0201fb0f3e916c7b1f5ba132d8bf6471b5ebc96d8cc6cfaaaf2f7bef1dde6d3] <==
	W0818 19:08:25.884567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.49:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:25.884652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.49:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:27.018018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.49:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:27.018158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.49:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:27.056979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.49:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:27.057133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.49:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:27.546667       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.49:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:27.546806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.49:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:27.773556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.49:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:27.773670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.49:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:28.269507       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.49:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:28.269616       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.39.49:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:28.508870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.49:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:28.508938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.49:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:29.006555       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.49:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:29.006618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.49:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:29.920886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.49:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:29.920985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.49:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:08:30.450767       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.49:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.49:8443: connect: connection refused
	E0818 19:08:30.450875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.49:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.49:8443: connect: connection refused" logger="UnhandledError"
	I0818 19:08:52.088190       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 19:10:47.665417       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2zjk8\": pod busybox-7dff88458-2zjk8 is already assigned to node \"ha-189125-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-2zjk8" node="ha-189125-m04"
	E0818 19:10:47.666940       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod f9d8586b-06b4-447e-a0b9-0a3365e27a28(default/busybox-7dff88458-2zjk8) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-2zjk8"
	E0818 19:10:47.667150       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-2zjk8\": pod busybox-7dff88458-2zjk8 is already assigned to node \"ha-189125-m04\"" pod="default/busybox-7dff88458-2zjk8"
	I0818 19:10:47.667347       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-2zjk8" node="ha-189125-m04"
	
	
	==> kubelet <==
	Aug 18 19:11:57 ha-189125 kubelet[1332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:11:57 ha-189125 kubelet[1332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:11:57 ha-189125 kubelet[1332]: E0818 19:11:57.761970    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008317761512397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:11:57 ha-189125 kubelet[1332]: E0818 19:11:57.761996    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008317761512397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:07 ha-189125 kubelet[1332]: E0818 19:12:07.763759    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008327763312925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:07 ha-189125 kubelet[1332]: E0818 19:12:07.764170    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008327763312925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:17 ha-189125 kubelet[1332]: E0818 19:12:17.765669    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008337765316658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:17 ha-189125 kubelet[1332]: E0818 19:12:17.765948    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008337765316658,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:27 ha-189125 kubelet[1332]: E0818 19:12:27.767920    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008347767570938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:27 ha-189125 kubelet[1332]: E0818 19:12:27.767966    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008347767570938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:37 ha-189125 kubelet[1332]: E0818 19:12:37.769975    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008357769681506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:37 ha-189125 kubelet[1332]: E0818 19:12:37.770056    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008357769681506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:47 ha-189125 kubelet[1332]: E0818 19:12:47.772680    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008367771927348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:47 ha-189125 kubelet[1332]: E0818 19:12:47.773072    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008367771927348,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:57 ha-189125 kubelet[1332]: E0818 19:12:57.532724    1332 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:12:57 ha-189125 kubelet[1332]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:12:57 ha-189125 kubelet[1332]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:12:57 ha-189125 kubelet[1332]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:12:57 ha-189125 kubelet[1332]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:12:57 ha-189125 kubelet[1332]: E0818 19:12:57.774705    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008377774356572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:12:57 ha-189125 kubelet[1332]: E0818 19:12:57.774732    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008377774356572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:13:07 ha-189125 kubelet[1332]: E0818 19:13:07.777544    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008387776941844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:13:07 ha-189125 kubelet[1332]: E0818 19:13:07.777639    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008387776941844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:13:17 ha-189125 kubelet[1332]: E0818 19:13:17.779567    1332 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008397779210571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:13:17 ha-189125 kubelet[1332]: E0818 19:13:17.779830    1332 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724008397779210571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:13:23.803570   34411 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-7747/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
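Note on the stderr line above: "bufio.Scanner: token too long" means lastStart.txt contains a line longer than bufio's default 64 KiB token limit, so the log reader gave up. A minimal Go sketch of reading such a file with an enlarged scanner buffer (a hypothetical standalone reader, not minikube's actual logs.go code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path; the report's real file lives under the Jenkins workspace.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); raise it to 1 MiB
		// so very long log lines no longer trigger "token too long".
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for scanner.Scan() {
			fmt.Println(scanner.Text())
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}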
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-189125 -n ha-189125
helpers_test.go:261: (dbg) Run:  kubectl --context ha-189125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.58s)

                                                
                                    
TestMountStart/serial/RestartStopped (27.18s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-387803
mount_start_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p mount-start-2-387803: exit status 80 (26.965774018s)

                                                
                                                
-- stdout --
	* [mount-start-2-387803] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster mount-start-2-387803
	* Restarting existing kvm2 VM for "mount-start-2-387803" ...
	* Updating the running kvm2 "mount-start-2-387803" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* Failed to start kvm2 VM. Running "minikube delete -p mount-start-2-387803" may fix it: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: ssh command error:
	command : sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	err     : Process exited with status 1
	output  : 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	A dependency job for crio.service failed. See 'journalctl -xe' for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
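Note on the failure above: the provisioning step itself is just writing /etc/sysconfig/crio.minikube and then running `systemctl restart crio`, and the restart exits non-zero because a dependency job of crio.service failed inside the guest. A minimal Go sketch of gathering the systemd context that the "See 'journalctl -xe'" hint points at (a hypothetical diagnostic helper meant to run where systemctl is available, e.g. inside the VM via `minikube ssh`; it is not part of minikube's provisioner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Commands that show which dependency of crio.service failed and why.
		cmds := [][]string{
			{"systemctl", "status", "crio", "--no-pager"},
			{"systemctl", "list-dependencies", "crio", "--no-pager"},
			{"journalctl", "-u", "crio", "--no-pager", "-n", "50"},
		}
		for _, args := range cmds {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			fmt.Printf("$ %v\n%s\n", args, out)
			if err != nil {
				fmt.Println("command exited with error:", err)
			}
		}
	}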
mount_start_test.go:168: restart failed: "out/minikube-linux-amd64 start -p mount-start-2-387803" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p mount-start-2-387803 -n mount-start-2-387803
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p mount-start-2-387803 -n mount-start-2-387803: exit status 6 (213.077123ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:24:14.993042   40212 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-2-387803" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig

                                                
                                                
** /stderr **
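Note on the status error above (status.go:417): the profile's context is missing from the kubeconfig, which is also why kubectl is reported as pointing at a stale minikube VM; running `minikube update-context -p mount-start-2-387803`, as the output suggests, rewrites the entry. A minimal Go sketch of the kind of lookup involved, assuming client-go's clientcmd package and reusing the profile name from this report:

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		profile := "mount-start-2-387803" // profile/context name from this report
		path := os.Getenv("KUBECONFIG")   // e.g. the integration run's kubeconfig

		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			return
		}
		ctx, ok := cfg.Contexts[profile]
		if !ok {
			// This is the condition behind "does not appear in ... kubeconfig".
			fmt.Printf("context %q not found; run `minikube update-context -p %s`\n", profile, profile)
			return
		}
		cluster, ok := cfg.Clusters[ctx.Cluster]
		if !ok {
			fmt.Printf("cluster %q referenced by context %q is missing\n", ctx.Cluster, profile)
			return
		}
		fmt.Println("API server endpoint:", cluster.Server)
	}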
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-2-387803" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/RestartStopped (27.18s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (327.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048993
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-048993
E0818 19:29:26.647651   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-048993: exit status 82 (2m1.855857024s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-048993-m03"  ...
	* Stopping node "multinode-048993-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
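Note on the failure above: exit status 82 (GUEST_STOP_TIMEOUT) means at least one VM still reported "Running" when the stop deadline expired; `minikube logs --file=logs.txt -p multinode-048993` collects the bundle the message asks for. A minimal Go sketch of the poll-until-stopped pattern that is timing out here, shelling out to the same status invocation the harness uses rather than minikube's internal machine API:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// hostState shells out to the status command used elsewhere in this report;
	// minikube itself queries the machine driver directly instead.
	func hostState(profile string) string {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func waitStopped(profile string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if hostState(profile) == "Stopped" {
				return nil
			}
			time.Sleep(5 * time.Second)
		}
		// This is the condition that surfaces as GUEST_STOP_TIMEOUT / exit status 82.
		return errors.New("timed out waiting for VM to stop")
	}

	func main() {
		if err := waitStopped("multinode-048993", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}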
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-048993" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048993 --wait=true -v=8 --alsologtostderr
E0818 19:31:44.018634   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:32:29.711701   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048993 --wait=true -v=8 --alsologtostderr: (3m23.429746437s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048993
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-048993 -n multinode-048993
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-048993 logs -n 25: (1.516519822s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1791348439/001/cp-test_multinode-048993-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993:/home/docker/cp-test_multinode-048993-m02_multinode-048993.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n multinode-048993 sudo cat                                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-048993-m02_multinode-048993.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03:/home/docker/cp-test_multinode-048993-m02_multinode-048993-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n multinode-048993-m03 sudo cat                                   | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-048993-m02_multinode-048993-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp testdata/cp-test.txt                                                | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1791348439/001/cp-test_multinode-048993-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993:/home/docker/cp-test_multinode-048993-m03_multinode-048993.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n multinode-048993 sudo cat                                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-048993-m03_multinode-048993.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02:/home/docker/cp-test_multinode-048993-m03_multinode-048993-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n multinode-048993-m02 sudo cat                                   | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-048993-m03_multinode-048993-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-048993 node stop m03                                                          | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	| node    | multinode-048993 node start                                                             | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-048993                                                                | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC |                     |
	| stop    | -p multinode-048993                                                                     | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC |                     |
	| start   | -p multinode-048993                                                                     | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:29 UTC | 18 Aug 24 19:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-048993                                                                | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 19:29:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 19:29:57.518525   43974 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:29:57.518728   43974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:29:57.518736   43974 out.go:358] Setting ErrFile to fd 2...
	I0818 19:29:57.518740   43974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:29:57.518899   43974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:29:57.519446   43974 out.go:352] Setting JSON to false
	I0818 19:29:57.520339   43974 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4341,"bootTime":1724005056,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:29:57.520390   43974 start.go:139] virtualization: kvm guest
	I0818 19:29:57.523332   43974 out.go:177] * [multinode-048993] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:29:57.524802   43974 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:29:57.524806   43974 notify.go:220] Checking for updates...
	I0818 19:29:57.526221   43974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:29:57.527651   43974 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:29:57.528970   43974 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:29:57.530113   43974 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:29:57.531364   43974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:29:57.533046   43974 config.go:182] Loaded profile config "multinode-048993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:29:57.533148   43974 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:29:57.533584   43974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:29:57.533652   43974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:29:57.548341   43974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44071
	I0818 19:29:57.548762   43974 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:29:57.549244   43974 main.go:141] libmachine: Using API Version  1
	I0818 19:29:57.549261   43974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:29:57.549578   43974 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:29:57.549846   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:29:57.583834   43974 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 19:29:57.585031   43974 start.go:297] selected driver: kvm2
	I0818 19:29:57.585048   43974 start.go:901] validating driver "kvm2" against &{Name:multinode-048993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:29:57.585175   43974 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:29:57.585505   43974 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:29:57.585589   43974 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:29:57.599736   43974 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:29:57.600414   43974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:29:57.600485   43974 cni.go:84] Creating CNI manager for ""
	I0818 19:29:57.600500   43974 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 19:29:57.600553   43974 start.go:340] cluster config:
	{Name:multinode-048993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-048993 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:29:57.600682   43974 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:29:57.602560   43974 out.go:177] * Starting "multinode-048993" primary control-plane node in "multinode-048993" cluster
	I0818 19:29:57.603843   43974 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:29:57.603875   43974 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 19:29:57.603891   43974 cache.go:56] Caching tarball of preloaded images
	I0818 19:29:57.603979   43974 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:29:57.603995   43974 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 19:29:57.604102   43974 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/config.json ...
	I0818 19:29:57.604295   43974 start.go:360] acquireMachinesLock for multinode-048993: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:29:57.604336   43974 start.go:364] duration metric: took 23.842µs to acquireMachinesLock for "multinode-048993"
	I0818 19:29:57.604355   43974 start.go:96] Skipping create...Using existing machine configuration
	I0818 19:29:57.604364   43974 fix.go:54] fixHost starting: 
	I0818 19:29:57.604627   43974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:29:57.604657   43974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:29:57.618248   43974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33161
	I0818 19:29:57.618650   43974 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:29:57.619078   43974 main.go:141] libmachine: Using API Version  1
	I0818 19:29:57.619101   43974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:29:57.619439   43974 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:29:57.619618   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:29:57.619785   43974 main.go:141] libmachine: (multinode-048993) Calling .GetState
	I0818 19:29:57.621372   43974 fix.go:112] recreateIfNeeded on multinode-048993: state=Running err=<nil>
	W0818 19:29:57.621399   43974 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 19:29:57.623980   43974 out.go:177] * Updating the running kvm2 "multinode-048993" VM ...
	I0818 19:29:57.625184   43974 machine.go:93] provisionDockerMachine start ...
	I0818 19:29:57.625199   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:29:57.625394   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:57.628235   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.628640   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:57.628662   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.628836   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:57.629000   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.629151   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.629272   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:57.629432   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:29:57.629726   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:29:57.629742   43974 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 19:29:57.748566   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-048993
	
	I0818 19:29:57.748596   43974 main.go:141] libmachine: (multinode-048993) Calling .GetMachineName
	I0818 19:29:57.748904   43974 buildroot.go:166] provisioning hostname "multinode-048993"
	I0818 19:29:57.748928   43974 main.go:141] libmachine: (multinode-048993) Calling .GetMachineName
	I0818 19:29:57.749112   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:57.752039   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.752505   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:57.752528   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.752676   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:57.752849   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.753000   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.753146   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:57.753285   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:29:57.753485   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:29:57.753503   43974 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-048993 && echo "multinode-048993" | sudo tee /etc/hostname
	I0818 19:29:57.879922   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-048993
	
	I0818 19:29:57.879955   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:57.883044   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.883488   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:57.883527   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.883687   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:57.883858   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.884028   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.884214   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:57.884406   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:29:57.884569   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:29:57.884583   43974 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-048993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-048993/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-048993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:29:57.996321   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:29:57.996353   43974 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 19:29:57.996396   43974 buildroot.go:174] setting up certificates
	I0818 19:29:57.996408   43974 provision.go:84] configureAuth start
	I0818 19:29:57.996423   43974 main.go:141] libmachine: (multinode-048993) Calling .GetMachineName
	I0818 19:29:57.996664   43974 main.go:141] libmachine: (multinode-048993) Calling .GetIP
	I0818 19:29:57.999212   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.999646   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:57.999674   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.999810   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:58.001996   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.002408   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:58.002436   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.002560   43974 provision.go:143] copyHostCerts
	I0818 19:29:58.002590   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:29:58.002628   43974 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 19:29:58.002648   43974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:29:58.002731   43974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 19:29:58.002842   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:29:58.002867   43974 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 19:29:58.002874   43974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:29:58.002910   43974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 19:29:58.002987   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:29:58.003011   43974 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 19:29:58.003020   43974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:29:58.003053   43974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 19:29:58.003133   43974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.multinode-048993 san=[127.0.0.1 192.168.39.185 localhost minikube multinode-048993]
	I0818 19:29:58.366644   43974 provision.go:177] copyRemoteCerts
	I0818 19:29:58.366703   43974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:29:58.366749   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:58.369136   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.369435   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:58.369467   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.369593   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:58.369809   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:58.369963   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:58.370140   43974 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:29:58.459730   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 19:29:58.459809   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:29:58.488777   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 19:29:58.488858   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0818 19:29:58.519175   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 19:29:58.519238   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 19:29:58.544041   43974 provision.go:87] duration metric: took 547.61858ms to configureAuth
	I0818 19:29:58.544065   43974 buildroot.go:189] setting minikube options for container-runtime
	I0818 19:29:58.544282   43974 config.go:182] Loaded profile config "multinode-048993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:29:58.544380   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:58.547279   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.547708   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:58.547733   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.547916   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:58.548105   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:58.548402   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:58.548573   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:58.548752   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:29:58.548941   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:29:58.548959   43974 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 19:31:29.251045   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 19:31:29.251100   43974 machine.go:96] duration metric: took 1m31.62590483s to provisionDockerMachine
	I0818 19:31:29.251127   43974 start.go:293] postStartSetup for "multinode-048993" (driver="kvm2")
	I0818 19:31:29.251155   43974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:31:29.251191   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.251659   43974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:31:29.251706   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:31:29.254718   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.255369   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.255423   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.255590   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:31:29.255812   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.255997   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:31:29.256138   43974 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:31:29.347154   43974 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:31:29.351316   43974 command_runner.go:130] > NAME=Buildroot
	I0818 19:31:29.351332   43974 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0818 19:31:29.351338   43974 command_runner.go:130] > ID=buildroot
	I0818 19:31:29.351345   43974 command_runner.go:130] > VERSION_ID=2023.02.9
	I0818 19:31:29.351352   43974 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0818 19:31:29.351484   43974 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 19:31:29.351510   43974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 19:31:29.351568   43974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 19:31:29.351638   43974 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 19:31:29.351649   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 19:31:29.351727   43974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:31:29.361136   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:31:29.388847   43974 start.go:296] duration metric: took 137.705611ms for postStartSetup
	I0818 19:31:29.388897   43974 fix.go:56] duration metric: took 1m31.784533308s for fixHost
	I0818 19:31:29.388923   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:31:29.391760   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.392114   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.392140   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.392289   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:31:29.392479   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.392671   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.392805   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:31:29.392977   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:31:29.393143   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:31:29.393152   43974 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 19:31:29.504365   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724009489.482227779
	
	I0818 19:31:29.504387   43974 fix.go:216] guest clock: 1724009489.482227779
	I0818 19:31:29.504404   43974 fix.go:229] Guest: 2024-08-18 19:31:29.482227779 +0000 UTC Remote: 2024-08-18 19:31:29.388907208 +0000 UTC m=+91.905779220 (delta=93.320571ms)
	I0818 19:31:29.504458   43974 fix.go:200] guest clock delta is within tolerance: 93.320571ms
	I0818 19:31:29.504466   43974 start.go:83] releasing machines lock for "multinode-048993", held for 1m31.900116845s
	I0818 19:31:29.504496   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.504761   43974 main.go:141] libmachine: (multinode-048993) Calling .GetIP
	I0818 19:31:29.507643   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.508016   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.508044   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.508305   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.508806   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.508996   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.509088   43974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:31:29.509138   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:31:29.509235   43974 ssh_runner.go:195] Run: cat /version.json
	I0818 19:31:29.509264   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:31:29.512122   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.512224   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.512521   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.512547   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.512574   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.512595   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.512661   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:31:29.512842   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:31:29.512848   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.513038   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.513043   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:31:29.513197   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:31:29.513234   43974 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:31:29.513317   43974 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:31:29.592430   43974 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0818 19:31:29.592681   43974 ssh_runner.go:195] Run: systemctl --version
	I0818 19:31:29.614467   43974 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0818 19:31:29.615119   43974 command_runner.go:130] > systemd 252 (252)
	I0818 19:31:29.615156   43974 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0818 19:31:29.615221   43974 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 19:31:29.776643   43974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 19:31:29.783724   43974 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0818 19:31:29.784148   43974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 19:31:29.784235   43974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:31:29.793474   43974 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0818 19:31:29.793491   43974 start.go:495] detecting cgroup driver to use...
	I0818 19:31:29.793560   43974 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 19:31:29.809056   43974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 19:31:29.822374   43974 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:31:29.822440   43974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:31:29.836427   43974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:31:29.849590   43974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:31:29.994616   43974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:31:30.148204   43974 docker.go:233] disabling docker service ...
	I0818 19:31:30.148277   43974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:31:30.166055   43974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:31:30.179750   43974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 19:31:30.323754   43974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 19:31:30.465888   43974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:31:30.479527   43974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:31:30.498590   43974 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0818 19:31:30.499033   43974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 19:31:30.499081   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.509619   43974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 19:31:30.509684   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.519730   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.529701   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.539467   43974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:31:30.549664   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.559364   43974 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.570461   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.580154   43974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:31:30.588842   43974 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0818 19:31:30.588894   43974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:31:30.597682   43974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:31:30.734471   43974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 19:31:38.400824   43974 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.666319231s)
	I0818 19:31:38.400857   43974 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 19:31:38.400908   43974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 19:31:38.406571   43974 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0818 19:31:38.406589   43974 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0818 19:31:38.406595   43974 command_runner.go:130] > Device: 0,22	Inode: 1333        Links: 1
	I0818 19:31:38.406602   43974 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0818 19:31:38.406608   43974 command_runner.go:130] > Access: 2024-08-18 19:31:38.302002172 +0000
	I0818 19:31:38.406620   43974 command_runner.go:130] > Modify: 2024-08-18 19:31:38.266001388 +0000
	I0818 19:31:38.406628   43974 command_runner.go:130] > Change: 2024-08-18 19:31:38.266001388 +0000
	I0818 19:31:38.406633   43974 command_runner.go:130] >  Birth: -
	I0818 19:31:38.406923   43974 start.go:563] Will wait 60s for crictl version
	I0818 19:31:38.406973   43974 ssh_runner.go:195] Run: which crictl
	I0818 19:31:38.410934   43974 command_runner.go:130] > /usr/bin/crictl
	I0818 19:31:38.411031   43974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:31:38.448335   43974 command_runner.go:130] > Version:  0.1.0
	I0818 19:31:38.448357   43974 command_runner.go:130] > RuntimeName:  cri-o
	I0818 19:31:38.448364   43974 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0818 19:31:38.448371   43974 command_runner.go:130] > RuntimeApiVersion:  v1
	I0818 19:31:38.448557   43974 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 19:31:38.448639   43974 ssh_runner.go:195] Run: crio --version
	I0818 19:31:38.476483   43974 command_runner.go:130] > crio version 1.29.1
	I0818 19:31:38.476504   43974 command_runner.go:130] > Version:        1.29.1
	I0818 19:31:38.476513   43974 command_runner.go:130] > GitCommit:      unknown
	I0818 19:31:38.476519   43974 command_runner.go:130] > GitCommitDate:  unknown
	I0818 19:31:38.476525   43974 command_runner.go:130] > GitTreeState:   clean
	I0818 19:31:38.476532   43974 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0818 19:31:38.476538   43974 command_runner.go:130] > GoVersion:      go1.21.6
	I0818 19:31:38.476544   43974 command_runner.go:130] > Compiler:       gc
	I0818 19:31:38.476551   43974 command_runner.go:130] > Platform:       linux/amd64
	I0818 19:31:38.476556   43974 command_runner.go:130] > Linkmode:       dynamic
	I0818 19:31:38.476564   43974 command_runner.go:130] > BuildTags:      
	I0818 19:31:38.476571   43974 command_runner.go:130] >   containers_image_ostree_stub
	I0818 19:31:38.476581   43974 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0818 19:31:38.476591   43974 command_runner.go:130] >   btrfs_noversion
	I0818 19:31:38.476600   43974 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0818 19:31:38.476613   43974 command_runner.go:130] >   libdm_no_deferred_remove
	I0818 19:31:38.476623   43974 command_runner.go:130] >   seccomp
	I0818 19:31:38.476634   43974 command_runner.go:130] > LDFlags:          unknown
	I0818 19:31:38.476642   43974 command_runner.go:130] > SeccompEnabled:   true
	I0818 19:31:38.476649   43974 command_runner.go:130] > AppArmorEnabled:  false
	I0818 19:31:38.476721   43974 ssh_runner.go:195] Run: crio --version
	I0818 19:31:38.505411   43974 command_runner.go:130] > crio version 1.29.1
	I0818 19:31:38.505436   43974 command_runner.go:130] > Version:        1.29.1
	I0818 19:31:38.505443   43974 command_runner.go:130] > GitCommit:      unknown
	I0818 19:31:38.505447   43974 command_runner.go:130] > GitCommitDate:  unknown
	I0818 19:31:38.505452   43974 command_runner.go:130] > GitTreeState:   clean
	I0818 19:31:38.505460   43974 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0818 19:31:38.505467   43974 command_runner.go:130] > GoVersion:      go1.21.6
	I0818 19:31:38.505474   43974 command_runner.go:130] > Compiler:       gc
	I0818 19:31:38.505481   43974 command_runner.go:130] > Platform:       linux/amd64
	I0818 19:31:38.505487   43974 command_runner.go:130] > Linkmode:       dynamic
	I0818 19:31:38.505496   43974 command_runner.go:130] > BuildTags:      
	I0818 19:31:38.505520   43974 command_runner.go:130] >   containers_image_ostree_stub
	I0818 19:31:38.505528   43974 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0818 19:31:38.505532   43974 command_runner.go:130] >   btrfs_noversion
	I0818 19:31:38.505536   43974 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0818 19:31:38.505543   43974 command_runner.go:130] >   libdm_no_deferred_remove
	I0818 19:31:38.505552   43974 command_runner.go:130] >   seccomp
	I0818 19:31:38.505562   43974 command_runner.go:130] > LDFlags:          unknown
	I0818 19:31:38.505572   43974 command_runner.go:130] > SeccompEnabled:   true
	I0818 19:31:38.505583   43974 command_runner.go:130] > AppArmorEnabled:  false
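	The two version probes talk to CRI-O over the socket that was just waited on (/var/run/crio/crio.sock). A quick manual equivalent, assuming crictl is not already configured for that socket, is to pass the endpoint explicitly:

		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version

	This should report RuntimeName cri-o and RuntimeVersion 1.29.1, matching the output captured above.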
	I0818 19:31:38.508632   43974 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 19:31:38.509980   43974 main.go:141] libmachine: (multinode-048993) Calling .GetIP
	I0818 19:31:38.512741   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:38.513180   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:38.513214   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:38.513428   43974 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 19:31:38.518811   43974 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0818 19:31:38.518915   43974 kubeadm.go:883] updating cluster {Name:multinode-048993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:31:38.519056   43974 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:31:38.519098   43974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:31:38.567557   43974 command_runner.go:130] > {
	I0818 19:31:38.567583   43974 command_runner.go:130] >   "images": [
	I0818 19:31:38.567587   43974 command_runner.go:130] >     {
	I0818 19:31:38.567595   43974 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0818 19:31:38.567602   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567608   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0818 19:31:38.567611   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567615   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567624   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0818 19:31:38.567631   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0818 19:31:38.567634   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567639   43974 command_runner.go:130] >       "size": "87165492",
	I0818 19:31:38.567643   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567647   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.567652   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567656   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567660   43974 command_runner.go:130] >     },
	I0818 19:31:38.567664   43974 command_runner.go:130] >     {
	I0818 19:31:38.567669   43974 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0818 19:31:38.567674   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567679   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0818 19:31:38.567686   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567690   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567697   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0818 19:31:38.567707   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0818 19:31:38.567710   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567714   43974 command_runner.go:130] >       "size": "87190579",
	I0818 19:31:38.567718   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567725   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.567729   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567733   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567738   43974 command_runner.go:130] >     },
	I0818 19:31:38.567741   43974 command_runner.go:130] >     {
	I0818 19:31:38.567747   43974 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0818 19:31:38.567753   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567758   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0818 19:31:38.567761   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567765   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567774   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0818 19:31:38.567781   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0818 19:31:38.567785   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567790   43974 command_runner.go:130] >       "size": "1363676",
	I0818 19:31:38.567796   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567800   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.567806   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567810   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567813   43974 command_runner.go:130] >     },
	I0818 19:31:38.567816   43974 command_runner.go:130] >     {
	I0818 19:31:38.567822   43974 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0818 19:31:38.567829   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567835   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0818 19:31:38.567841   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567844   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567852   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0818 19:31:38.567864   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0818 19:31:38.567870   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567874   43974 command_runner.go:130] >       "size": "31470524",
	I0818 19:31:38.567879   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567882   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.567887   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567893   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567897   43974 command_runner.go:130] >     },
	I0818 19:31:38.567902   43974 command_runner.go:130] >     {
	I0818 19:31:38.567908   43974 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0818 19:31:38.567915   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567920   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0818 19:31:38.567924   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567929   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567938   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0818 19:31:38.567945   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0818 19:31:38.567951   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567955   43974 command_runner.go:130] >       "size": "61245718",
	I0818 19:31:38.567959   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567963   43974 command_runner.go:130] >       "username": "nonroot",
	I0818 19:31:38.567967   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567972   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567977   43974 command_runner.go:130] >     },
	I0818 19:31:38.567982   43974 command_runner.go:130] >     {
	I0818 19:31:38.567989   43974 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0818 19:31:38.567995   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568001   43974 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0818 19:31:38.568006   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568011   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568019   43974 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0818 19:31:38.568026   43974 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0818 19:31:38.568032   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568036   43974 command_runner.go:130] >       "size": "149009664",
	I0818 19:31:38.568040   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568047   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.568051   43974 command_runner.go:130] >       },
	I0818 19:31:38.568055   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568060   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568065   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568070   43974 command_runner.go:130] >     },
	I0818 19:31:38.568073   43974 command_runner.go:130] >     {
	I0818 19:31:38.568079   43974 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0818 19:31:38.568084   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568089   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0818 19:31:38.568095   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568099   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568106   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0818 19:31:38.568115   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0818 19:31:38.568118   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568122   43974 command_runner.go:130] >       "size": "95233506",
	I0818 19:31:38.568126   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568130   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.568134   43974 command_runner.go:130] >       },
	I0818 19:31:38.568138   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568142   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568146   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568149   43974 command_runner.go:130] >     },
	I0818 19:31:38.568153   43974 command_runner.go:130] >     {
	I0818 19:31:38.568159   43974 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0818 19:31:38.568174   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568181   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0818 19:31:38.568185   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568189   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568205   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0818 19:31:38.568215   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0818 19:31:38.568218   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568223   43974 command_runner.go:130] >       "size": "89437512",
	I0818 19:31:38.568227   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568231   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.568236   43974 command_runner.go:130] >       },
	I0818 19:31:38.568240   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568244   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568247   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568250   43974 command_runner.go:130] >     },
	I0818 19:31:38.568254   43974 command_runner.go:130] >     {
	I0818 19:31:38.568259   43974 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0818 19:31:38.568263   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568267   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0818 19:31:38.568271   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568274   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568281   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0818 19:31:38.568288   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0818 19:31:38.568291   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568295   43974 command_runner.go:130] >       "size": "92728217",
	I0818 19:31:38.568299   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.568302   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568308   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568312   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568317   43974 command_runner.go:130] >     },
	I0818 19:31:38.568320   43974 command_runner.go:130] >     {
	I0818 19:31:38.568326   43974 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0818 19:31:38.568332   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568336   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0818 19:31:38.568342   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568346   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568353   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0818 19:31:38.568362   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0818 19:31:38.568365   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568370   43974 command_runner.go:130] >       "size": "68420936",
	I0818 19:31:38.568375   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568379   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.568385   43974 command_runner.go:130] >       },
	I0818 19:31:38.568388   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568392   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568396   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568399   43974 command_runner.go:130] >     },
	I0818 19:31:38.568403   43974 command_runner.go:130] >     {
	I0818 19:31:38.568410   43974 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0818 19:31:38.568414   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568419   43974 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0818 19:31:38.568424   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568428   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568434   43974 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0818 19:31:38.568443   43974 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0818 19:31:38.568447   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568451   43974 command_runner.go:130] >       "size": "742080",
	I0818 19:31:38.568455   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568459   43974 command_runner.go:130] >         "value": "65535"
	I0818 19:31:38.568462   43974 command_runner.go:130] >       },
	I0818 19:31:38.568466   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568472   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568476   43974 command_runner.go:130] >       "pinned": true
	I0818 19:31:38.568479   43974 command_runner.go:130] >     }
	I0818 19:31:38.568482   43974 command_runner.go:130] >   ]
	I0818 19:31:38.568485   43974 command_runner.go:130] > }
	I0818 19:31:38.569253   43974 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:31:38.569269   43974 crio.go:433] Images already preloaded, skipping extraction
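	The preload check parses the JSON from "sudo crictl images --output json" and concludes that every image required for Kubernetes v1.31.0 on cri-o is already present, so no tarball extraction is needed. A rough way to eyeball the same list by hand, assuming jq is installed on the node (it is not guaranteed to be in the guest image):

		sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort

	The tags printed should include kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.31.0, etcd 3.5.15-0, coredns v1.11.1, and pause 3.10, as listed in the JSON above.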
	I0818 19:31:38.569318   43974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:31:38.604688   43974 command_runner.go:130] > {
	I0818 19:31:38.604717   43974 command_runner.go:130] >   "images": [
	I0818 19:31:38.604723   43974 command_runner.go:130] >     {
	I0818 19:31:38.604735   43974 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0818 19:31:38.604743   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.604752   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0818 19:31:38.604758   43974 command_runner.go:130] >       ],
	I0818 19:31:38.604764   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.604776   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0818 19:31:38.604787   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0818 19:31:38.604793   43974 command_runner.go:130] >       ],
	I0818 19:31:38.604801   43974 command_runner.go:130] >       "size": "87165492",
	I0818 19:31:38.604810   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.604816   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.604826   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.604836   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.604841   43974 command_runner.go:130] >     },
	I0818 19:31:38.604847   43974 command_runner.go:130] >     {
	I0818 19:31:38.604856   43974 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0818 19:31:38.604867   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.604903   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0818 19:31:38.604914   43974 command_runner.go:130] >       ],
	I0818 19:31:38.604920   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.604932   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0818 19:31:38.604947   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0818 19:31:38.604956   43974 command_runner.go:130] >       ],
	I0818 19:31:38.604963   43974 command_runner.go:130] >       "size": "87190579",
	I0818 19:31:38.604972   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.604992   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605002   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605011   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605019   43974 command_runner.go:130] >     },
	I0818 19:31:38.605027   43974 command_runner.go:130] >     {
	I0818 19:31:38.605037   43974 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0818 19:31:38.605047   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605059   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0818 19:31:38.605068   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605076   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605090   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0818 19:31:38.605103   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0818 19:31:38.605107   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605112   43974 command_runner.go:130] >       "size": "1363676",
	I0818 19:31:38.605115   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.605119   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605123   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605130   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605134   43974 command_runner.go:130] >     },
	I0818 19:31:38.605139   43974 command_runner.go:130] >     {
	I0818 19:31:38.605145   43974 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0818 19:31:38.605152   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605157   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0818 19:31:38.605163   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605168   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605178   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0818 19:31:38.605190   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0818 19:31:38.605196   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605200   43974 command_runner.go:130] >       "size": "31470524",
	I0818 19:31:38.605207   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.605219   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605225   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605229   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605235   43974 command_runner.go:130] >     },
	I0818 19:31:38.605238   43974 command_runner.go:130] >     {
	I0818 19:31:38.605246   43974 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0818 19:31:38.605250   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605262   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0818 19:31:38.605270   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605280   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605294   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0818 19:31:38.605308   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0818 19:31:38.605316   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605326   43974 command_runner.go:130] >       "size": "61245718",
	I0818 19:31:38.605333   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.605339   43974 command_runner.go:130] >       "username": "nonroot",
	I0818 19:31:38.605346   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605350   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605356   43974 command_runner.go:130] >     },
	I0818 19:31:38.605361   43974 command_runner.go:130] >     {
	I0818 19:31:38.605369   43974 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0818 19:31:38.605376   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605381   43974 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0818 19:31:38.605386   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605391   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605399   43974 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0818 19:31:38.605408   43974 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0818 19:31:38.605415   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605419   43974 command_runner.go:130] >       "size": "149009664",
	I0818 19:31:38.605426   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605430   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.605436   43974 command_runner.go:130] >       },
	I0818 19:31:38.605441   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605446   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605450   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605456   43974 command_runner.go:130] >     },
	I0818 19:31:38.605459   43974 command_runner.go:130] >     {
	I0818 19:31:38.605468   43974 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0818 19:31:38.605474   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605479   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0818 19:31:38.605484   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605488   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605497   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0818 19:31:38.605508   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0818 19:31:38.605514   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605518   43974 command_runner.go:130] >       "size": "95233506",
	I0818 19:31:38.605523   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605528   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.605533   43974 command_runner.go:130] >       },
	I0818 19:31:38.605537   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605543   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605548   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605553   43974 command_runner.go:130] >     },
	I0818 19:31:38.605557   43974 command_runner.go:130] >     {
	I0818 19:31:38.605565   43974 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0818 19:31:38.605572   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605577   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0818 19:31:38.605583   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605587   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605603   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0818 19:31:38.605613   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0818 19:31:38.605617   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605620   43974 command_runner.go:130] >       "size": "89437512",
	I0818 19:31:38.605624   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605627   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.605631   43974 command_runner.go:130] >       },
	I0818 19:31:38.605635   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605639   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605642   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605646   43974 command_runner.go:130] >     },
	I0818 19:31:38.605649   43974 command_runner.go:130] >     {
	I0818 19:31:38.605655   43974 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0818 19:31:38.605658   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605664   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0818 19:31:38.605667   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605672   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605679   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0818 19:31:38.605687   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0818 19:31:38.605693   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605697   43974 command_runner.go:130] >       "size": "92728217",
	I0818 19:31:38.605702   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.605707   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605712   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605716   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605719   43974 command_runner.go:130] >     },
	I0818 19:31:38.605724   43974 command_runner.go:130] >     {
	I0818 19:31:38.605730   43974 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0818 19:31:38.605734   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605739   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0818 19:31:38.605743   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605747   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605756   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0818 19:31:38.605765   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0818 19:31:38.605771   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605775   43974 command_runner.go:130] >       "size": "68420936",
	I0818 19:31:38.605781   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605784   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.605790   43974 command_runner.go:130] >       },
	I0818 19:31:38.605794   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605800   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605804   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605809   43974 command_runner.go:130] >     },
	I0818 19:31:38.605813   43974 command_runner.go:130] >     {
	I0818 19:31:38.605821   43974 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0818 19:31:38.605825   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605832   43974 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0818 19:31:38.605835   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605839   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605846   43974 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0818 19:31:38.605855   43974 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0818 19:31:38.605861   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605865   43974 command_runner.go:130] >       "size": "742080",
	I0818 19:31:38.605871   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605877   43974 command_runner.go:130] >         "value": "65535"
	I0818 19:31:38.605882   43974 command_runner.go:130] >       },
	I0818 19:31:38.605886   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605892   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605896   43974 command_runner.go:130] >       "pinned": true
	I0818 19:31:38.605902   43974 command_runner.go:130] >     }
	I0818 19:31:38.605905   43974 command_runner.go:130] >   ]
	I0818 19:31:38.605909   43974 command_runner.go:130] > }
	I0818 19:31:38.606062   43974 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:31:38.606077   43974 cache_images.go:84] Images are preloaded, skipping loading
	I0818 19:31:38.606089   43974 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.0 crio true true} ...
	I0818 19:31:38.606223   43974 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-048993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
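	The generated unit content above clears ExecStart and relaunches kubelet from the v1.31.0 binary with node-specific flags (--hostname-override=multinode-048993, --node-ip=192.168.39.185). As a sketch, content like this would typically be installed as a systemd drop-in and activated with a daemon-reload; the drop-in path below is an assumption for illustration, not taken from this log:

		# hypothetical drop-in path; the unit body itself is copied from the log above
		sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
		[Unit]
		Wants=crio.service

		[Service]
		ExecStart=
		ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-048993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185

		[Install]
		EOF
		sudo systemctl daemon-reload && sudo systemctl restart kubelet

	Only the tee/heredoc wrapper and the file path are illustrative; the flags and paths inside the unit are exactly those logged by kubeadm.go:946.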
	I0818 19:31:38.606305   43974 ssh_runner.go:195] Run: crio config
	I0818 19:31:38.639285   43974 command_runner.go:130] ! time="2024-08-18 19:31:38.616742718Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0818 19:31:38.645727   43974 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0818 19:31:38.653313   43974 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0818 19:31:38.653341   43974 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0818 19:31:38.653351   43974 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0818 19:31:38.653356   43974 command_runner.go:130] > #
	I0818 19:31:38.653378   43974 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0818 19:31:38.653391   43974 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0818 19:31:38.653404   43974 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0818 19:31:38.653417   43974 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0818 19:31:38.653425   43974 command_runner.go:130] > # reload'.
	I0818 19:31:38.653435   43974 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0818 19:31:38.653448   43974 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0818 19:31:38.653460   43974 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0818 19:31:38.653471   43974 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0818 19:31:38.653480   43974 command_runner.go:130] > [crio]
	I0818 19:31:38.653492   43974 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0818 19:31:38.653502   43974 command_runner.go:130] > # containers images, in this directory.
	I0818 19:31:38.653512   43974 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0818 19:31:38.653539   43974 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0818 19:31:38.653550   43974 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0818 19:31:38.653561   43974 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0818 19:31:38.653570   43974 command_runner.go:130] > # imagestore = ""
	I0818 19:31:38.653583   43974 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0818 19:31:38.653596   43974 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0818 19:31:38.653606   43974 command_runner.go:130] > storage_driver = "overlay"
	I0818 19:31:38.653618   43974 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0818 19:31:38.653633   43974 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0818 19:31:38.653641   43974 command_runner.go:130] > storage_option = [
	I0818 19:31:38.653649   43974 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0818 19:31:38.653652   43974 command_runner.go:130] > ]
	I0818 19:31:38.653660   43974 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0818 19:31:38.653668   43974 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0818 19:31:38.653675   43974 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0818 19:31:38.653681   43974 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0818 19:31:38.653689   43974 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0818 19:31:38.653696   43974 command_runner.go:130] > # always happen on a node reboot
	I0818 19:31:38.653700   43974 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0818 19:31:38.653712   43974 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0818 19:31:38.653719   43974 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0818 19:31:38.653726   43974 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0818 19:31:38.653731   43974 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0818 19:31:38.653740   43974 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0818 19:31:38.653749   43974 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0818 19:31:38.653755   43974 command_runner.go:130] > # internal_wipe = true
	I0818 19:31:38.653763   43974 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0818 19:31:38.653770   43974 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0818 19:31:38.653775   43974 command_runner.go:130] > # internal_repair = false
	I0818 19:31:38.653782   43974 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0818 19:31:38.653791   43974 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0818 19:31:38.653798   43974 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0818 19:31:38.653803   43974 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0818 19:31:38.653811   43974 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0818 19:31:38.653817   43974 command_runner.go:130] > [crio.api]
	I0818 19:31:38.653823   43974 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0818 19:31:38.653829   43974 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0818 19:31:38.653835   43974 command_runner.go:130] > # IP address on which the stream server will listen.
	I0818 19:31:38.653841   43974 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0818 19:31:38.653847   43974 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0818 19:31:38.653854   43974 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0818 19:31:38.653857   43974 command_runner.go:130] > # stream_port = "0"
	I0818 19:31:38.653862   43974 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0818 19:31:38.653868   43974 command_runner.go:130] > # stream_enable_tls = false
	I0818 19:31:38.653874   43974 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0818 19:31:38.653880   43974 command_runner.go:130] > # stream_idle_timeout = ""
	I0818 19:31:38.653886   43974 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0818 19:31:38.653893   43974 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0818 19:31:38.653898   43974 command_runner.go:130] > # minutes.
	I0818 19:31:38.653902   43974 command_runner.go:130] > # stream_tls_cert = ""
	I0818 19:31:38.653910   43974 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0818 19:31:38.653918   43974 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0818 19:31:38.653924   43974 command_runner.go:130] > # stream_tls_key = ""
	I0818 19:31:38.653929   43974 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0818 19:31:38.653937   43974 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0818 19:31:38.653951   43974 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0818 19:31:38.653958   43974 command_runner.go:130] > # stream_tls_ca = ""
	I0818 19:31:38.653965   43974 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0818 19:31:38.653971   43974 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0818 19:31:38.653979   43974 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0818 19:31:38.653987   43974 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0818 19:31:38.653995   43974 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0818 19:31:38.654003   43974 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0818 19:31:38.654007   43974 command_runner.go:130] > [crio.runtime]
	I0818 19:31:38.654013   43974 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0818 19:31:38.654019   43974 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0818 19:31:38.654023   43974 command_runner.go:130] > # "nofile=1024:2048"
	I0818 19:31:38.654031   43974 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0818 19:31:38.654035   43974 command_runner.go:130] > # default_ulimits = [
	I0818 19:31:38.654041   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654047   43974 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0818 19:31:38.654053   43974 command_runner.go:130] > # no_pivot = false
	I0818 19:31:38.654059   43974 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0818 19:31:38.654067   43974 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0818 19:31:38.654074   43974 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0818 19:31:38.654079   43974 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0818 19:31:38.654086   43974 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0818 19:31:38.654092   43974 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0818 19:31:38.654099   43974 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0818 19:31:38.654103   43974 command_runner.go:130] > # Cgroup setting for conmon
	I0818 19:31:38.654111   43974 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0818 19:31:38.654119   43974 command_runner.go:130] > conmon_cgroup = "pod"
	I0818 19:31:38.654128   43974 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0818 19:31:38.654133   43974 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0818 19:31:38.654142   43974 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0818 19:31:38.654148   43974 command_runner.go:130] > conmon_env = [
	I0818 19:31:38.654154   43974 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0818 19:31:38.654159   43974 command_runner.go:130] > ]
	I0818 19:31:38.654164   43974 command_runner.go:130] > # Additional environment variables to set for all the
	I0818 19:31:38.654171   43974 command_runner.go:130] > # containers. These are overridden if set in the
	I0818 19:31:38.654177   43974 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0818 19:31:38.654183   43974 command_runner.go:130] > # default_env = [
	I0818 19:31:38.654186   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654192   43974 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0818 19:31:38.654203   43974 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0818 19:31:38.654210   43974 command_runner.go:130] > # selinux = false
	I0818 19:31:38.654216   43974 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0818 19:31:38.654223   43974 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0818 19:31:38.654229   43974 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0818 19:31:38.654235   43974 command_runner.go:130] > # seccomp_profile = ""
	I0818 19:31:38.654240   43974 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0818 19:31:38.654247   43974 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0818 19:31:38.654257   43974 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0818 19:31:38.654266   43974 command_runner.go:130] > # which might increase security.
	I0818 19:31:38.654277   43974 command_runner.go:130] > # This option is currently deprecated,
	I0818 19:31:38.654290   43974 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0818 19:31:38.654301   43974 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0818 19:31:38.654314   43974 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0818 19:31:38.654326   43974 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0818 19:31:38.654339   43974 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0818 19:31:38.654351   43974 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0818 19:31:38.654361   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.654372   43974 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0818 19:31:38.654380   43974 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0818 19:31:38.654387   43974 command_runner.go:130] > # the cgroup blockio controller.
	I0818 19:31:38.654392   43974 command_runner.go:130] > # blockio_config_file = ""
	I0818 19:31:38.654400   43974 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0818 19:31:38.654406   43974 command_runner.go:130] > # blockio parameters.
	I0818 19:31:38.654410   43974 command_runner.go:130] > # blockio_reload = false
	I0818 19:31:38.654418   43974 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0818 19:31:38.654421   43974 command_runner.go:130] > # irqbalance daemon.
	I0818 19:31:38.654428   43974 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0818 19:31:38.654435   43974 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0818 19:31:38.654443   43974 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0818 19:31:38.654451   43974 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0818 19:31:38.654459   43974 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0818 19:31:38.654465   43974 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0818 19:31:38.654472   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.654476   43974 command_runner.go:130] > # rdt_config_file = ""
	I0818 19:31:38.654482   43974 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0818 19:31:38.654488   43974 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0818 19:31:38.654502   43974 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0818 19:31:38.654509   43974 command_runner.go:130] > # separate_pull_cgroup = ""
	I0818 19:31:38.654515   43974 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0818 19:31:38.654523   43974 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0818 19:31:38.654529   43974 command_runner.go:130] > # will be added.
	I0818 19:31:38.654533   43974 command_runner.go:130] > # default_capabilities = [
	I0818 19:31:38.654539   43974 command_runner.go:130] > # 	"CHOWN",
	I0818 19:31:38.654544   43974 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0818 19:31:38.654549   43974 command_runner.go:130] > # 	"FSETID",
	I0818 19:31:38.654554   43974 command_runner.go:130] > # 	"FOWNER",
	I0818 19:31:38.654560   43974 command_runner.go:130] > # 	"SETGID",
	I0818 19:31:38.654564   43974 command_runner.go:130] > # 	"SETUID",
	I0818 19:31:38.654570   43974 command_runner.go:130] > # 	"SETPCAP",
	I0818 19:31:38.654574   43974 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0818 19:31:38.654581   43974 command_runner.go:130] > # 	"KILL",
	I0818 19:31:38.654585   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654594   43974 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0818 19:31:38.654602   43974 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0818 19:31:38.654609   43974 command_runner.go:130] > # add_inheritable_capabilities = false
	I0818 19:31:38.654615   43974 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0818 19:31:38.654622   43974 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0818 19:31:38.654626   43974 command_runner.go:130] > default_sysctls = [
	I0818 19:31:38.654631   43974 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0818 19:31:38.654638   43974 command_runner.go:130] > ]
	I0818 19:31:38.654643   43974 command_runner.go:130] > # List of devices on the host that a
	I0818 19:31:38.654651   43974 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0818 19:31:38.654655   43974 command_runner.go:130] > # allowed_devices = [
	I0818 19:31:38.654659   43974 command_runner.go:130] > # 	"/dev/fuse",
	I0818 19:31:38.654662   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654666   43974 command_runner.go:130] > # List of additional devices, specified as
	I0818 19:31:38.654676   43974 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0818 19:31:38.654683   43974 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0818 19:31:38.654688   43974 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0818 19:31:38.654694   43974 command_runner.go:130] > # additional_devices = [
	I0818 19:31:38.654698   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654704   43974 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0818 19:31:38.654710   43974 command_runner.go:130] > # cdi_spec_dirs = [
	I0818 19:31:38.654714   43974 command_runner.go:130] > # 	"/etc/cdi",
	I0818 19:31:38.654720   43974 command_runner.go:130] > # 	"/var/run/cdi",
	I0818 19:31:38.654723   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654731   43974 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0818 19:31:38.654738   43974 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0818 19:31:38.654745   43974 command_runner.go:130] > # Defaults to false.
	I0818 19:31:38.654749   43974 command_runner.go:130] > # device_ownership_from_security_context = false
	I0818 19:31:38.654757   43974 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0818 19:31:38.654764   43974 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0818 19:31:38.654768   43974 command_runner.go:130] > # hooks_dir = [
	I0818 19:31:38.654773   43974 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0818 19:31:38.654779   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654785   43974 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0818 19:31:38.654792   43974 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0818 19:31:38.654799   43974 command_runner.go:130] > # its default mounts from the following two files:
	I0818 19:31:38.654802   43974 command_runner.go:130] > #
	I0818 19:31:38.654808   43974 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0818 19:31:38.654816   43974 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0818 19:31:38.654822   43974 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0818 19:31:38.654827   43974 command_runner.go:130] > #
	I0818 19:31:38.654833   43974 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0818 19:31:38.654839   43974 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0818 19:31:38.654849   43974 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0818 19:31:38.654855   43974 command_runner.go:130] > #      only add mounts it finds in this file.
	I0818 19:31:38.654858   43974 command_runner.go:130] > #
	I0818 19:31:38.654863   43974 command_runner.go:130] > # default_mounts_file = ""
	I0818 19:31:38.654870   43974 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0818 19:31:38.654876   43974 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0818 19:31:38.654882   43974 command_runner.go:130] > pids_limit = 1024
	I0818 19:31:38.654889   43974 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0818 19:31:38.654897   43974 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0818 19:31:38.654903   43974 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0818 19:31:38.654913   43974 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0818 19:31:38.654919   43974 command_runner.go:130] > # log_size_max = -1
	I0818 19:31:38.654925   43974 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0818 19:31:38.654931   43974 command_runner.go:130] > # log_to_journald = false
	I0818 19:31:38.654937   43974 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0818 19:31:38.654944   43974 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0818 19:31:38.654949   43974 command_runner.go:130] > # Path to directory for container attach sockets.
	I0818 19:31:38.654956   43974 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0818 19:31:38.654962   43974 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0818 19:31:38.654968   43974 command_runner.go:130] > # bind_mount_prefix = ""
	I0818 19:31:38.654974   43974 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0818 19:31:38.654979   43974 command_runner.go:130] > # read_only = false
	I0818 19:31:38.654985   43974 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0818 19:31:38.654994   43974 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0818 19:31:38.655000   43974 command_runner.go:130] > # live configuration reload.
	I0818 19:31:38.655005   43974 command_runner.go:130] > # log_level = "info"
	I0818 19:31:38.655013   43974 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0818 19:31:38.655019   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.655025   43974 command_runner.go:130] > # log_filter = ""
	I0818 19:31:38.655032   43974 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0818 19:31:38.655041   43974 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0818 19:31:38.655046   43974 command_runner.go:130] > # separated by comma.
	I0818 19:31:38.655053   43974 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0818 19:31:38.655059   43974 command_runner.go:130] > # uid_mappings = ""
	I0818 19:31:38.655065   43974 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0818 19:31:38.655073   43974 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0818 19:31:38.655078   43974 command_runner.go:130] > # separated by comma.
	I0818 19:31:38.655085   43974 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0818 19:31:38.655095   43974 command_runner.go:130] > # gid_mappings = ""
	I0818 19:31:38.655103   43974 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0818 19:31:38.655109   43974 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0818 19:31:38.655118   43974 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0818 19:31:38.655128   43974 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0818 19:31:38.655135   43974 command_runner.go:130] > # minimum_mappable_uid = -1
	I0818 19:31:38.655141   43974 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0818 19:31:38.655149   43974 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0818 19:31:38.655154   43974 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0818 19:31:38.655164   43974 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0818 19:31:38.655170   43974 command_runner.go:130] > # minimum_mappable_gid = -1
	I0818 19:31:38.655176   43974 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0818 19:31:38.655184   43974 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0818 19:31:38.655192   43974 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0818 19:31:38.655196   43974 command_runner.go:130] > # ctr_stop_timeout = 30
	I0818 19:31:38.655202   43974 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0818 19:31:38.655210   43974 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0818 19:31:38.655215   43974 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0818 19:31:38.655222   43974 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0818 19:31:38.655226   43974 command_runner.go:130] > drop_infra_ctr = false
	I0818 19:31:38.655233   43974 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0818 19:31:38.655239   43974 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0818 19:31:38.655249   43974 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0818 19:31:38.655256   43974 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0818 19:31:38.655266   43974 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0818 19:31:38.655278   43974 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0818 19:31:38.655294   43974 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0818 19:31:38.655305   43974 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0818 19:31:38.655313   43974 command_runner.go:130] > # shared_cpuset = ""
	I0818 19:31:38.655325   43974 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0818 19:31:38.655335   43974 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0818 19:31:38.655343   43974 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0818 19:31:38.655349   43974 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0818 19:31:38.655356   43974 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0818 19:31:38.655361   43974 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0818 19:31:38.655373   43974 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0818 19:31:38.655397   43974 command_runner.go:130] > # enable_criu_support = false
	I0818 19:31:38.655409   43974 command_runner.go:130] > # Enable/disable the generation of the container,
	I0818 19:31:38.655419   43974 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0818 19:31:38.655425   43974 command_runner.go:130] > # enable_pod_events = false
	I0818 19:31:38.655431   43974 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0818 19:31:38.655446   43974 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0818 19:31:38.655452   43974 command_runner.go:130] > # default_runtime = "runc"
	I0818 19:31:38.655457   43974 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0818 19:31:38.655466   43974 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0818 19:31:38.655477   43974 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0818 19:31:38.655484   43974 command_runner.go:130] > # creation as a file is not desired either.
	I0818 19:31:38.655491   43974 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0818 19:31:38.655498   43974 command_runner.go:130] > # the hostname is being managed dynamically.
	I0818 19:31:38.655503   43974 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0818 19:31:38.655509   43974 command_runner.go:130] > # ]
	I0818 19:31:38.655514   43974 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0818 19:31:38.655523   43974 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0818 19:31:38.655531   43974 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0818 19:31:38.655538   43974 command_runner.go:130] > # Each entry in the table should follow the format:
	I0818 19:31:38.655541   43974 command_runner.go:130] > #
	I0818 19:31:38.655546   43974 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0818 19:31:38.655553   43974 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0818 19:31:38.655572   43974 command_runner.go:130] > # runtime_type = "oci"
	I0818 19:31:38.655579   43974 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0818 19:31:38.655584   43974 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0818 19:31:38.655590   43974 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0818 19:31:38.655595   43974 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0818 19:31:38.655601   43974 command_runner.go:130] > # monitor_env = []
	I0818 19:31:38.655605   43974 command_runner.go:130] > # privileged_without_host_devices = false
	I0818 19:31:38.655611   43974 command_runner.go:130] > # allowed_annotations = []
	I0818 19:31:38.655617   43974 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0818 19:31:38.655622   43974 command_runner.go:130] > # Where:
	I0818 19:31:38.655627   43974 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0818 19:31:38.655635   43974 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0818 19:31:38.655643   43974 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0818 19:31:38.655651   43974 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0818 19:31:38.655654   43974 command_runner.go:130] > #   in $PATH.
	I0818 19:31:38.655661   43974 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0818 19:31:38.655668   43974 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0818 19:31:38.655673   43974 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0818 19:31:38.655679   43974 command_runner.go:130] > #   state.
	I0818 19:31:38.655685   43974 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0818 19:31:38.655693   43974 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0818 19:31:38.655700   43974 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0818 19:31:38.655705   43974 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0818 19:31:38.655713   43974 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0818 19:31:38.655719   43974 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0818 19:31:38.655727   43974 command_runner.go:130] > #   The currently recognized values are:
	I0818 19:31:38.655732   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0818 19:31:38.655741   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0818 19:31:38.655748   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0818 19:31:38.655754   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0818 19:31:38.655763   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0818 19:31:38.655769   43974 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0818 19:31:38.655777   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0818 19:31:38.655785   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0818 19:31:38.655791   43974 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0818 19:31:38.655799   43974 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0818 19:31:38.655806   43974 command_runner.go:130] > #   deprecated option "conmon".
	I0818 19:31:38.655812   43974 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0818 19:31:38.655819   43974 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0818 19:31:38.655825   43974 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0818 19:31:38.655832   43974 command_runner.go:130] > #   should be moved to the container's cgroup
	I0818 19:31:38.655838   43974 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0818 19:31:38.655845   43974 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0818 19:31:38.655851   43974 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0818 19:31:38.655858   43974 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0818 19:31:38.655861   43974 command_runner.go:130] > #
	I0818 19:31:38.655866   43974 command_runner.go:130] > # Using the seccomp notifier feature:
	I0818 19:31:38.655871   43974 command_runner.go:130] > #
	I0818 19:31:38.655876   43974 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0818 19:31:38.655884   43974 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0818 19:31:38.655888   43974 command_runner.go:130] > #
	I0818 19:31:38.655894   43974 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0818 19:31:38.655902   43974 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0818 19:31:38.655905   43974 command_runner.go:130] > #
	I0818 19:31:38.655911   43974 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0818 19:31:38.655916   43974 command_runner.go:130] > # feature.
	I0818 19:31:38.655919   43974 command_runner.go:130] > #
	I0818 19:31:38.655927   43974 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0818 19:31:38.655933   43974 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0818 19:31:38.655940   43974 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0818 19:31:38.655948   43974 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0818 19:31:38.655954   43974 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0818 19:31:38.655959   43974 command_runner.go:130] > #
	I0818 19:31:38.655965   43974 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0818 19:31:38.655973   43974 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0818 19:31:38.655977   43974 command_runner.go:130] > #
	I0818 19:31:38.655982   43974 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0818 19:31:38.655990   43974 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0818 19:31:38.655993   43974 command_runner.go:130] > #
	I0818 19:31:38.655998   43974 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0818 19:31:38.656006   43974 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0818 19:31:38.656011   43974 command_runner.go:130] > # limitation.
	I0818 19:31:38.656016   43974 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0818 19:31:38.656022   43974 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0818 19:31:38.656026   43974 command_runner.go:130] > runtime_type = "oci"
	I0818 19:31:38.656032   43974 command_runner.go:130] > runtime_root = "/run/runc"
	I0818 19:31:38.656036   43974 command_runner.go:130] > runtime_config_path = ""
	I0818 19:31:38.656043   43974 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0818 19:31:38.656047   43974 command_runner.go:130] > monitor_cgroup = "pod"
	I0818 19:31:38.656052   43974 command_runner.go:130] > monitor_exec_cgroup = ""
	I0818 19:31:38.656055   43974 command_runner.go:130] > monitor_env = [
	I0818 19:31:38.656061   43974 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0818 19:31:38.656065   43974 command_runner.go:130] > ]
	I0818 19:31:38.656070   43974 command_runner.go:130] > privileged_without_host_devices = false
	I0818 19:31:38.656078   43974 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0818 19:31:38.656084   43974 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0818 19:31:38.656090   43974 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0818 19:31:38.656099   43974 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0818 19:31:38.656109   43974 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0818 19:31:38.656117   43974 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0818 19:31:38.656130   43974 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0818 19:31:38.656139   43974 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0818 19:31:38.656145   43974 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0818 19:31:38.656152   43974 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0818 19:31:38.656155   43974 command_runner.go:130] > # Example:
	I0818 19:31:38.656160   43974 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0818 19:31:38.656164   43974 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0818 19:31:38.656168   43974 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0818 19:31:38.656173   43974 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0818 19:31:38.656176   43974 command_runner.go:130] > # cpuset = 0
	I0818 19:31:38.656180   43974 command_runner.go:130] > # cpushares = "0-1"
	I0818 19:31:38.656183   43974 command_runner.go:130] > # Where:
	I0818 19:31:38.656188   43974 command_runner.go:130] > # The workload name is workload-type.
	I0818 19:31:38.656194   43974 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0818 19:31:38.656199   43974 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0818 19:31:38.656204   43974 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0818 19:31:38.656212   43974 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0818 19:31:38.656217   43974 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0818 19:31:38.656221   43974 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0818 19:31:38.656227   43974 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0818 19:31:38.656231   43974 command_runner.go:130] > # Default value is set to true
	I0818 19:31:38.656235   43974 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0818 19:31:38.656240   43974 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0818 19:31:38.656244   43974 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0818 19:31:38.656248   43974 command_runner.go:130] > # Default value is set to 'false'
	I0818 19:31:38.656252   43974 command_runner.go:130] > # disable_hostport_mapping = false
	I0818 19:31:38.656260   43974 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0818 19:31:38.656264   43974 command_runner.go:130] > #
	I0818 19:31:38.656272   43974 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0818 19:31:38.656281   43974 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0818 19:31:38.656289   43974 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0818 19:31:38.656298   43974 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0818 19:31:38.656306   43974 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0818 19:31:38.656311   43974 command_runner.go:130] > [crio.image]
	I0818 19:31:38.656320   43974 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0818 19:31:38.656326   43974 command_runner.go:130] > # default_transport = "docker://"
	I0818 19:31:38.656336   43974 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0818 19:31:38.656348   43974 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0818 19:31:38.656356   43974 command_runner.go:130] > # global_auth_file = ""
	I0818 19:31:38.656362   43974 command_runner.go:130] > # The image used to instantiate infra containers.
	I0818 19:31:38.656372   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.656379   43974 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0818 19:31:38.656385   43974 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0818 19:31:38.656393   43974 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0818 19:31:38.656399   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.656403   43974 command_runner.go:130] > # pause_image_auth_file = ""
	I0818 19:31:38.656411   43974 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0818 19:31:38.656418   43974 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0818 19:31:38.656425   43974 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0818 19:31:38.656435   43974 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0818 19:31:38.656442   43974 command_runner.go:130] > # pause_command = "/pause"
	I0818 19:31:38.656448   43974 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0818 19:31:38.656457   43974 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0818 19:31:38.656467   43974 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0818 19:31:38.656476   43974 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0818 19:31:38.656484   43974 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0818 19:31:38.656491   43974 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0818 19:31:38.656497   43974 command_runner.go:130] > # pinned_images = [
	I0818 19:31:38.656500   43974 command_runner.go:130] > # ]
	I0818 19:31:38.656508   43974 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0818 19:31:38.656515   43974 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0818 19:31:38.656523   43974 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0818 19:31:38.656531   43974 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0818 19:31:38.656539   43974 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0818 19:31:38.656543   43974 command_runner.go:130] > # signature_policy = ""
	I0818 19:31:38.656550   43974 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0818 19:31:38.656556   43974 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0818 19:31:38.656564   43974 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0818 19:31:38.656570   43974 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0818 19:31:38.656578   43974 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0818 19:31:38.656582   43974 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0818 19:31:38.656590   43974 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0818 19:31:38.656597   43974 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0818 19:31:38.656603   43974 command_runner.go:130] > # changing them here.
	I0818 19:31:38.656607   43974 command_runner.go:130] > # insecure_registries = [
	I0818 19:31:38.656612   43974 command_runner.go:130] > # ]
	I0818 19:31:38.656618   43974 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0818 19:31:38.656625   43974 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0818 19:31:38.656630   43974 command_runner.go:130] > # image_volumes = "mkdir"
	I0818 19:31:38.656637   43974 command_runner.go:130] > # Temporary directory to use for storing big files
	I0818 19:31:38.656641   43974 command_runner.go:130] > # big_files_temporary_dir = ""
	I0818 19:31:38.656647   43974 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0818 19:31:38.656653   43974 command_runner.go:130] > # CNI plugins.
	I0818 19:31:38.656657   43974 command_runner.go:130] > [crio.network]
	I0818 19:31:38.656664   43974 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0818 19:31:38.656669   43974 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0818 19:31:38.656675   43974 command_runner.go:130] > # cni_default_network = ""
	I0818 19:31:38.656681   43974 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0818 19:31:38.656688   43974 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0818 19:31:38.656694   43974 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0818 19:31:38.656702   43974 command_runner.go:130] > # plugin_dirs = [
	I0818 19:31:38.656709   43974 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0818 19:31:38.656713   43974 command_runner.go:130] > # ]
	I0818 19:31:38.656721   43974 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0818 19:31:38.656726   43974 command_runner.go:130] > [crio.metrics]
	I0818 19:31:38.656731   43974 command_runner.go:130] > # Globally enable or disable metrics support.
	I0818 19:31:38.656737   43974 command_runner.go:130] > enable_metrics = true
	I0818 19:31:38.656742   43974 command_runner.go:130] > # Specify enabled metrics collectors.
	I0818 19:31:38.656749   43974 command_runner.go:130] > # Per default all metrics are enabled.
	I0818 19:31:38.656754   43974 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0818 19:31:38.656762   43974 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0818 19:31:38.656770   43974 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0818 19:31:38.656774   43974 command_runner.go:130] > # metrics_collectors = [
	I0818 19:31:38.656781   43974 command_runner.go:130] > # 	"operations",
	I0818 19:31:38.656787   43974 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0818 19:31:38.656795   43974 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0818 19:31:38.656801   43974 command_runner.go:130] > # 	"operations_errors",
	I0818 19:31:38.656805   43974 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0818 19:31:38.656812   43974 command_runner.go:130] > # 	"image_pulls_by_name",
	I0818 19:31:38.656816   43974 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0818 19:31:38.656822   43974 command_runner.go:130] > # 	"image_pulls_failures",
	I0818 19:31:38.656827   43974 command_runner.go:130] > # 	"image_pulls_successes",
	I0818 19:31:38.656833   43974 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0818 19:31:38.656837   43974 command_runner.go:130] > # 	"image_layer_reuse",
	I0818 19:31:38.656841   43974 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0818 19:31:38.656848   43974 command_runner.go:130] > # 	"containers_oom_total",
	I0818 19:31:38.656852   43974 command_runner.go:130] > # 	"containers_oom",
	I0818 19:31:38.656858   43974 command_runner.go:130] > # 	"processes_defunct",
	I0818 19:31:38.656861   43974 command_runner.go:130] > # 	"operations_total",
	I0818 19:31:38.656868   43974 command_runner.go:130] > # 	"operations_latency_seconds",
	I0818 19:31:38.656874   43974 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0818 19:31:38.656880   43974 command_runner.go:130] > # 	"operations_errors_total",
	I0818 19:31:38.656884   43974 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0818 19:31:38.656890   43974 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0818 19:31:38.656895   43974 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0818 19:31:38.656902   43974 command_runner.go:130] > # 	"image_pulls_success_total",
	I0818 19:31:38.656906   43974 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0818 19:31:38.656913   43974 command_runner.go:130] > # 	"containers_oom_count_total",
	I0818 19:31:38.656918   43974 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0818 19:31:38.656924   43974 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0818 19:31:38.656927   43974 command_runner.go:130] > # ]
	I0818 19:31:38.656932   43974 command_runner.go:130] > # The port on which the metrics server will listen.
	I0818 19:31:38.656938   43974 command_runner.go:130] > # metrics_port = 9090
	I0818 19:31:38.656943   43974 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0818 19:31:38.656949   43974 command_runner.go:130] > # metrics_socket = ""
	I0818 19:31:38.656954   43974 command_runner.go:130] > # The certificate for the secure metrics server.
	I0818 19:31:38.656962   43974 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0818 19:31:38.656968   43974 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0818 19:31:38.656975   43974 command_runner.go:130] > # certificate on any modification event.
	I0818 19:31:38.656979   43974 command_runner.go:130] > # metrics_cert = ""
	I0818 19:31:38.656986   43974 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0818 19:31:38.656991   43974 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0818 19:31:38.656997   43974 command_runner.go:130] > # metrics_key = ""
	I0818 19:31:38.657003   43974 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0818 19:31:38.657009   43974 command_runner.go:130] > [crio.tracing]
	I0818 19:31:38.657014   43974 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0818 19:31:38.657019   43974 command_runner.go:130] > # enable_tracing = false
	I0818 19:31:38.657024   43974 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0818 19:31:38.657031   43974 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0818 19:31:38.657037   43974 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0818 19:31:38.657043   43974 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0818 19:31:38.657048   43974 command_runner.go:130] > # CRI-O NRI configuration.
	I0818 19:31:38.657055   43974 command_runner.go:130] > [crio.nri]
	I0818 19:31:38.657059   43974 command_runner.go:130] > # Globally enable or disable NRI.
	I0818 19:31:38.657064   43974 command_runner.go:130] > # enable_nri = false
	I0818 19:31:38.657068   43974 command_runner.go:130] > # NRI socket to listen on.
	I0818 19:31:38.657073   43974 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0818 19:31:38.657077   43974 command_runner.go:130] > # NRI plugin directory to use.
	I0818 19:31:38.657084   43974 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0818 19:31:38.657089   43974 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0818 19:31:38.657096   43974 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0818 19:31:38.657101   43974 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0818 19:31:38.657108   43974 command_runner.go:130] > # nri_disable_connections = false
	I0818 19:31:38.657113   43974 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0818 19:31:38.657120   43974 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0818 19:31:38.657127   43974 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0818 19:31:38.657134   43974 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0818 19:31:38.657140   43974 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0818 19:31:38.657146   43974 command_runner.go:130] > [crio.stats]
	I0818 19:31:38.657151   43974 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0818 19:31:38.657159   43974 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0818 19:31:38.657163   43974 command_runner.go:130] > # stats_collection_period = 0
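Note: the dump above is the crio.conf that minikube rendered for this node; the uncommented values (conmon_env, cgroup_manager = "cgroupfs", seccomp_use_default_when_empty, pids_limit, default_sysctls, drop_infra_ctr, pinns_path, pause_image, and the [crio.runtime.runtimes.runc] block) are the overrides it applied. A minimal spot-check of what CRI-O actually loaded, assuming the crio binary's "config" subcommand is available on the node (this is not something the recorded run executes), would be:

	# Sketch only: dump the effective CRI-O config on the node and grep the minikube overrides.
	minikube ssh -p multinode-048993 -- "sudo crio config 2>/dev/null | grep -E 'cgroup_manager|pids_limit|pause_image|drop_infra_ctr'"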
	I0818 19:31:38.657273   43974 cni.go:84] Creating CNI manager for ""
	I0818 19:31:38.657288   43974 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 19:31:38.657297   43974 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:31:38.657331   43974 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-048993 NodeName:multinode-048993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 19:31:38.657485   43974 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-048993"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 19:31:38.657547   43974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 19:31:38.668623   43974 command_runner.go:130] > kubeadm
	I0818 19:31:38.668641   43974 command_runner.go:130] > kubectl
	I0818 19:31:38.668649   43974 command_runner.go:130] > kubelet
	I0818 19:31:38.668739   43974 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:31:38.668806   43974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 19:31:38.678682   43974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0818 19:31:38.696241   43974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:31:38.713731   43974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
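The 2160-byte payload written to /var/tmp/minikube/kubeadm.yaml.new is the multi-document kubeadm config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged spot-check, not performed by the test itself and assuming minikube ssh passes the trailing command through to the guest, would be:

	# Sketch only: confirm the staged kubeadm config matches the planner output above.
	minikube ssh -p multinode-048993 -- "sudo grep -E 'advertiseAddress|podSubnet|controlPlaneEndpoint' /var/tmp/minikube/kubeadm.yaml.new"
	# Expected: advertiseAddress: 192.168.39.185, podSubnet: "10.244.0.0/16",
	# and controlPlaneEndpoint: control-plane.minikube.internal:8443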
	I0818 19:31:38.731612   43974 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0818 19:31:38.735631   43974 command_runner.go:130] > 192.168.39.185	control-plane.minikube.internal
	I0818 19:31:38.735703   43974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:31:38.873782   43974 ssh_runner.go:195] Run: sudo systemctl start kubelet
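After the daemon-reload and kubelet start above, a quick sanity check (again a sketch, not part of the recorded run) would be to ask systemd for the unit state:

	# Sketch only: kubelet should report active (or activating while the node bootstraps).
	minikube ssh -p multinode-048993 -- "sudo systemctl is-active kubelet"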
	I0818 19:31:38.888741   43974 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993 for IP: 192.168.39.185
	I0818 19:31:38.888769   43974 certs.go:194] generating shared ca certs ...
	I0818 19:31:38.888795   43974 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:31:38.888987   43974 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 19:31:38.889032   43974 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 19:31:38.889042   43974 certs.go:256] generating profile certs ...
	I0818 19:31:38.889119   43974 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/client.key
	I0818 19:31:38.889174   43974 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.key.9dd43d17
	I0818 19:31:38.889214   43974 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.key
	I0818 19:31:38.889225   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 19:31:38.889236   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 19:31:38.889249   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 19:31:38.889261   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 19:31:38.889277   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 19:31:38.889297   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 19:31:38.889316   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 19:31:38.889334   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 19:31:38.889403   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 19:31:38.889434   43974 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 19:31:38.889443   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:31:38.889472   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:31:38.889501   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:31:38.889526   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 19:31:38.889562   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:31:38.889588   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:38.889601   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 19:31:38.889614   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 19:31:38.890202   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:31:38.915097   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:31:38.939476   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:31:38.963251   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 19:31:38.986443   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 19:31:39.011286   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 19:31:39.036771   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:31:39.060748   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 19:31:39.084324   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 19:31:39.107771   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 19:31:39.132562   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 19:31:39.156167   43974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:31:39.172870   43974 ssh_runner.go:195] Run: openssl version
	I0818 19:31:39.178672   43974 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0818 19:31:39.178746   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:31:39.189134   43974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:39.193788   43974 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:39.193849   43974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:39.193892   43974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:39.199526   43974 command_runner.go:130] > b5213941
	I0818 19:31:39.199569   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 19:31:39.208825   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 19:31:39.219688   43974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 19:31:39.224199   43974 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:31:39.224225   43974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:31:39.224259   43974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 19:31:39.230088   43974 command_runner.go:130] > 51391683
	I0818 19:31:39.230134   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 19:31:39.239929   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 19:31:39.251315   43974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 19:31:39.256111   43974 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:31:39.256334   43974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:31:39.256381   43974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 19:31:39.262349   43974 command_runner.go:130] > 3ec20f2e
	I0818 19:31:39.262421   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 19:31:39.272220   43974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:31:39.276871   43974 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:31:39.276892   43974 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0818 19:31:39.276898   43974 command_runner.go:130] > Device: 253,1	Inode: 532758      Links: 1
	I0818 19:31:39.276906   43974 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0818 19:31:39.276914   43974 command_runner.go:130] > Access: 2024-08-18 19:24:49.477711638 +0000
	I0818 19:31:39.276919   43974 command_runner.go:130] > Modify: 2024-08-18 19:24:49.477711638 +0000
	I0818 19:31:39.276924   43974 command_runner.go:130] > Change: 2024-08-18 19:24:49.477711638 +0000
	I0818 19:31:39.276929   43974 command_runner.go:130] >  Birth: 2024-08-18 19:24:49.477711638 +0000
	I0818 19:31:39.277001   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 19:31:39.282835   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.282901   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 19:31:39.290054   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.290122   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 19:31:39.295749   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.295812   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 19:31:39.301238   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.301300   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 19:31:39.306734   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.306797   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 19:31:39.312203   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.312276   43974 kubeadm.go:392] StartCluster: {Name:multinode-048993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:31:39.312413   43974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 19:31:39.312471   43974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:31:39.353401   43974 command_runner.go:130] > 45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add
	I0818 19:31:39.353435   43974 command_runner.go:130] > 034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970
	I0818 19:31:39.353446   43974 command_runner.go:130] > ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09
	I0818 19:31:39.353456   43974 command_runner.go:130] > e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa
	I0818 19:31:39.353465   43974 command_runner.go:130] > 90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5
	I0818 19:31:39.353473   43974 command_runner.go:130] > eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121
	I0818 19:31:39.353482   43974 command_runner.go:130] > e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3
	I0818 19:31:39.353492   43974 command_runner.go:130] > a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56
	I0818 19:31:39.353523   43974 cri.go:89] found id: "45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add"
	I0818 19:31:39.353534   43974 cri.go:89] found id: "034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970"
	I0818 19:31:39.353540   43974 cri.go:89] found id: "ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09"
	I0818 19:31:39.353544   43974 cri.go:89] found id: "e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa"
	I0818 19:31:39.353549   43974 cri.go:89] found id: "90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5"
	I0818 19:31:39.353553   43974 cri.go:89] found id: "eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121"
	I0818 19:31:39.353558   43974 cri.go:89] found id: "e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3"
	I0818 19:31:39.353562   43974 cri.go:89] found id: "a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56"
	I0818 19:31:39.353566   43974 cri.go:89] found id: ""
	I0818 19:31:39.353605   43974 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.655553377Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-7frzh,Uid:9a575e7c-5ef9-468b-a917-ecdb76b22c63,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009538731714441,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:31:44.591804214Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-6sbml,Uid:c64b53bf-6c95-4f8b-abee-12a73b557ab9,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1724009504976513272,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:31:44.591809298Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&PodSandboxMetadata{Name:kindnet-x4z7j,Uid:1a272cb2-280a-42cb-a0b3-9c4292d1db39,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009504943416063,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-08-18T19:31:44.591810877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&PodSandboxMetadata{Name:kube-proxy-28dj6,Uid:d2949b15-f781-4283-a78e-190a50e61487,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009504932784805,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:31:44.591817431Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8c0b6bd1-9414-41ca-92a5-8737a3071582,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1724009504916915002,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-18T19:31:44.591814534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-048993,Uid:251d787115b7540ccdaca898e5c46a2b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009501059749972,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 251d787115b7540ccdaca898e5c46a2b,kubernetes.io/config.seen: 2024-08-18T19:31:40.585996993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:77b81c5eb3b0d5d8f340754f580aeeb62b
19122d5d7e2fd3ec3ae516203e09a9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-048993,Uid:0bc4e515b3bcad171c5a2bf56de43ea6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009501058978921,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0bc4e515b3bcad171c5a2bf56de43ea6,kubernetes.io/config.seen: 2024-08-18T19:31:40.585998875Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-048993,Uid:05881ddcb619c86507c6e41c4b1fd421,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009501056975485,Labels:map[string]string{component: kube-controller-manager,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05881ddcb619c86507c6e41c4b1fd421,kubernetes.io/config.seen: 2024-08-18T19:31:40.585998061Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&PodSandboxMetadata{Name:etcd-multinode-048993,Uid:679b5c8b5600f8bdcf4b592e6a912dc9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009501056086419,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.185:2379,kuberne
tes.io/config.hash: 679b5c8b5600f8bdcf4b592e6a912dc9,kubernetes.io/config.seen: 2024-08-18T19:31:40.585993451Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-7frzh,Uid:9a575e7c-5ef9-468b-a917-ecdb76b22c63,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009171945036683,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:26:11.628736617Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8c0b6bd1-9414-41ca-92a5-8737a3071582,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1724009119133369436,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-18T19:25:18.821607172Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-6sbml,Uid:c64b53bf-6c95-4f8b-abee-12a73b557ab9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009119123600035,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:25:18.816743248Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&PodSandboxMetadata{Name:kube-proxy-28dj6,Uid:d2949b15-f781-4283-a78e-190a50e61487,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009104263528900,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:25:03.337855732Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&PodSandboxMetadata{Name:kindnet-x4z7j,Uid:1a272cb2-280a-42cb-a0b3-9c4292d1db39,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009103652594437,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:25:03.344872060Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-048993,Uid:251d787115b7540ccdaca898e5c46a2b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009092666004163,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 251d787115b7540ccdaca898e5c46a2b,kubernetes.io/config.seen: 2024-08-18T19:24:52.187218372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71dbd12c99bf66
408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-048993,Uid:0bc4e515b3bcad171c5a2bf56de43ea6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009092662553075,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0bc4e515b3bcad171c5a2bf56de43ea6,kubernetes.io/config.seen: 2024-08-18T19:24:52.187220659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-048993,Uid:05881ddcb619c86507c6e41c4b1fd421,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009092645189191,Labels:map[string]string{component: kube-c
ontroller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05881ddcb619c86507c6e41c4b1fd421,kubernetes.io/config.seen: 2024-08-18T19:24:52.187219753Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&PodSandboxMetadata{Name:etcd-multinode-048993,Uid:679b5c8b5600f8bdcf4b592e6a912dc9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009092639745008,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.185:2379,kubernetes.io/config.hash: 679b5c8b5600f8bdcf4b592e6a912dc9,kubernetes.io/config.seen: 2024-08-18T19:24:52.187211601Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3b7399df-6710-4893-8cfd-0b8bae7c6c0e name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.656913564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1e13ac8-8355-4138-9b0a-bcb80cef1a5b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.657014717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1e13ac8-8355-4138-9b0a-bcb80cef1a5b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.657539378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc87dcd9aa9deddc7562fc238cbe336930df16f77335aec78802fd36fcf4f2c0,PodSandboxId:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724009538868344760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0,PodSandboxId:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724009505368994646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7,PodSandboxId:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724009505232833739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdcc7ac806c263132a374548fa08b957407b214c4c3e64e19f92a95f40533d,PodSandboxId:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724009505167584804,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6,PodSandboxId:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724009505139302914,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a,PodSandboxId:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724009501300203179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc,PodSandboxId:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724009501319068230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145,PodSandboxId:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724009501245805614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c,PodSandboxId:77b81c5eb3b0d5d8f340754f580aeeb62b19122d5d7e2fd3ec3ae516203e09a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724009501229041362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5316c792a79b718866fc191168399e5fb13a26819e2b646e0cf0f6b6557a6d62,PodSandboxId:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724009174507967338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add,PodSandboxId:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724009119340101142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970,PodSandboxId:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724009119281310809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09,PodSandboxId:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724009107657715318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa,PodSandboxId:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724009104355534243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5,PodSandboxId:71dbd12c99bf66408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724009092877705381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121,PodSandboxId:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724009092869904469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3,PodSandboxId:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724009092840215405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56,PodSandboxId:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724009092826804549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1e13ac8-8355-4138-9b0a-bcb80cef1a5b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.663538982Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9d298ed-8450-4f88-a22e-5da2c09489d6 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.663769175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9d298ed-8450-4f88-a22e-5da2c09489d6 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.665457286Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b16d5e68-bccd-49e0-8fc7-f4f6c0ef2f40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.666101472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009601666071503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b16d5e68-bccd-49e0-8fc7-f4f6c0ef2f40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.667084535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f387522c-5691-4f25-81ab-74382fb4e865 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.667418768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f387522c-5691-4f25-81ab-74382fb4e865 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.667809168Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc87dcd9aa9deddc7562fc238cbe336930df16f77335aec78802fd36fcf4f2c0,PodSandboxId:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724009538868344760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0,PodSandboxId:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724009505368994646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7,PodSandboxId:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724009505232833739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdcc7ac806c263132a374548fa08b957407b214c4c3e64e19f92a95f40533d,PodSandboxId:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724009505167584804,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6,PodSandboxId:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724009505139302914,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a,PodSandboxId:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724009501300203179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc,PodSandboxId:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724009501319068230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145,PodSandboxId:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724009501245805614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c,PodSandboxId:77b81c5eb3b0d5d8f340754f580aeeb62b19122d5d7e2fd3ec3ae516203e09a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724009501229041362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5316c792a79b718866fc191168399e5fb13a26819e2b646e0cf0f6b6557a6d62,PodSandboxId:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724009174507967338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add,PodSandboxId:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724009119340101142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970,PodSandboxId:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724009119281310809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09,PodSandboxId:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724009107657715318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa,PodSandboxId:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724009104355534243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5,PodSandboxId:71dbd12c99bf66408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724009092877705381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121,PodSandboxId:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724009092869904469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3,PodSandboxId:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724009092840215405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56,PodSandboxId:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724009092826804549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f387522c-5691-4f25-81ab-74382fb4e865 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.715027797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cbeead83-8341-4567-9251-b255e7a906e9 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.715233922Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cbeead83-8341-4567-9251-b255e7a906e9 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.722879628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=627f51e4-04dc-404b-8ffb-03eee918ad8d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.723487445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009601723461590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=627f51e4-04dc-404b-8ffb-03eee918ad8d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.724126911Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af7dc6ac-0bbe-4c8b-adb2-02b22f451e4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.724265122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af7dc6ac-0bbe-4c8b-adb2-02b22f451e4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.724600664Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc87dcd9aa9deddc7562fc238cbe336930df16f77335aec78802fd36fcf4f2c0,PodSandboxId:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724009538868344760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0,PodSandboxId:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724009505368994646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7,PodSandboxId:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724009505232833739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdcc7ac806c263132a374548fa08b957407b214c4c3e64e19f92a95f40533d,PodSandboxId:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724009505167584804,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6,PodSandboxId:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724009505139302914,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a,PodSandboxId:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724009501300203179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc,PodSandboxId:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724009501319068230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145,PodSandboxId:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724009501245805614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c,PodSandboxId:77b81c5eb3b0d5d8f340754f580aeeb62b19122d5d7e2fd3ec3ae516203e09a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724009501229041362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5316c792a79b718866fc191168399e5fb13a26819e2b646e0cf0f6b6557a6d62,PodSandboxId:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724009174507967338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add,PodSandboxId:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724009119340101142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970,PodSandboxId:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724009119281310809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09,PodSandboxId:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724009107657715318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa,PodSandboxId:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724009104355534243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5,PodSandboxId:71dbd12c99bf66408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724009092877705381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121,PodSandboxId:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724009092869904469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3,PodSandboxId:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724009092840215405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56,PodSandboxId:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724009092826804549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af7dc6ac-0bbe-4c8b-adb2-02b22f451e4f name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.774408958Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87e81a15-e67d-4330-a98f-6b5f65fa2b6c name=/runtime.v1.RuntimeService/Version
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.774480417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87e81a15-e67d-4330-a98f-6b5f65fa2b6c name=/runtime.v1.RuntimeService/Version
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.776101421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c266e113-badc-4a5b-8511-e4af1e57d71f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.776583407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009601776561077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c266e113-badc-4a5b-8511-e4af1e57d71f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.777198957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4fc37f2f-0fa2-441b-80ff-6f9a41c412a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.777256479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fc37f2f-0fa2-441b-80ff-6f9a41c412a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:33:21 multinode-048993 crio[2754]: time="2024-08-18 19:33:21.777607804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc87dcd9aa9deddc7562fc238cbe336930df16f77335aec78802fd36fcf4f2c0,PodSandboxId:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724009538868344760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0,PodSandboxId:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724009505368994646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7,PodSandboxId:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724009505232833739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdcc7ac806c263132a374548fa08b957407b214c4c3e64e19f92a95f40533d,PodSandboxId:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724009505167584804,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6,PodSandboxId:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724009505139302914,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a,PodSandboxId:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724009501300203179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc,PodSandboxId:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724009501319068230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145,PodSandboxId:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724009501245805614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c,PodSandboxId:77b81c5eb3b0d5d8f340754f580aeeb62b19122d5d7e2fd3ec3ae516203e09a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724009501229041362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5316c792a79b718866fc191168399e5fb13a26819e2b646e0cf0f6b6557a6d62,PodSandboxId:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724009174507967338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add,PodSandboxId:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724009119340101142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970,PodSandboxId:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724009119281310809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09,PodSandboxId:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724009107657715318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa,PodSandboxId:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724009104355534243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5,PodSandboxId:71dbd12c99bf66408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724009092877705381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121,PodSandboxId:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724009092869904469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3,PodSandboxId:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724009092840215405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56,PodSandboxId:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724009092826804549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fc37f2f-0fa2-441b-80ff-6f9a41c412a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	cc87dcd9aa9de       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   92b4b30080a97       busybox-7dff88458-7frzh
	e5524d7007d0d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   858d857765a33       kindnet-x4z7j
	4e11f83ab3834       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   200c9423cba15       coredns-6f6b679f8f-6sbml
	c9bdcc7ac806c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   660596f961b87       storage-provisioner
	eb6dcb819816d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   7518b32a59a94       kube-proxy-28dj6
	2af5b668c0fcb       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   819c70ddf881b       kube-apiserver-multinode-048993
	9d6e54ff4050c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   ae5a99e5e89a0       etcd-multinode-048993
	e4d8d775d3b05       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   de5d60540061c       kube-controller-manager-multinode-048993
	fdebc266be304       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   77b81c5eb3b0d       kube-scheduler-multinode-048993
	5316c792a79b7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   feee816b0242b       busybox-7dff88458-7frzh
	45a61d7ee2e26       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   17994e6d5a09a       coredns-6f6b679f8f-6sbml
	034596ea64fd3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   a514572853159       storage-provisioner
	ad82c36028623       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   78e151ba3ed37       kindnet-x4z7j
	e5faa6d0a7631       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   b64158a3a5642       kube-proxy-28dj6
	90d31f58d95ae       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   71dbd12c99bf6       kube-scheduler-multinode-048993
	eec00e2c5e7eb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   f83c57db372e7       etcd-multinode-048993
	e1d4611a4a993       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   5e2bf7189a6d7       kube-apiserver-multinode-048993
	a55d4b9fa2536       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   888e9f9ce70d4       kube-controller-manager-multinode-048993
	
	
	==> coredns [45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add] <==
	[INFO] 10.244.1.2:55890 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001763973s
	[INFO] 10.244.1.2:44367 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008542s
	[INFO] 10.244.1.2:46850 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153428s
	[INFO] 10.244.1.2:48940 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001346329s
	[INFO] 10.244.1.2:37702 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086624s
	[INFO] 10.244.1.2:55482 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073943s
	[INFO] 10.244.1.2:48162 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107614s
	[INFO] 10.244.0.3:51710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123557s
	[INFO] 10.244.0.3:36847 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062581s
	[INFO] 10.244.0.3:46175 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079334s
	[INFO] 10.244.0.3:47441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007288s
	[INFO] 10.244.1.2:53518 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015543s
	[INFO] 10.244.1.2:50528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105074s
	[INFO] 10.244.1.2:55912 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006123s
	[INFO] 10.244.1.2:58978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080123s
	[INFO] 10.244.0.3:37306 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013604s
	[INFO] 10.244.0.3:50941 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014053s
	[INFO] 10.244.0.3:41496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088488s
	[INFO] 10.244.0.3:56717 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011318s
	[INFO] 10.244.1.2:54243 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150812s
	[INFO] 10.244.1.2:42566 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111402s
	[INFO] 10.244.1.2:55877 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064247s
	[INFO] 10.244.1.2:59078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000062466s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42879 - 37966 "HINFO IN 3929632951270858664.6796358618220062419. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009777619s
	
	
	==> describe nodes <==
	Name:               multinode-048993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-048993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=multinode-048993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T19_24_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:24:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-048993
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:33:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:31:44 +0000   Sun, 18 Aug 2024 19:24:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:31:44 +0000   Sun, 18 Aug 2024 19:24:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:31:44 +0000   Sun, 18 Aug 2024 19:24:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:31:44 +0000   Sun, 18 Aug 2024 19:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    multinode-048993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dddb9e3f1ed4476a8ed6af277f7d2c4f
	  System UUID:                dddb9e3f-1ed4-476a-8ed6-af277f7d2c4f
	  Boot ID:                    1c5c5224-be60-4cf5-8851-63b45bb308bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7frzh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 coredns-6f6b679f8f-6sbml                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m19s
	  kube-system                 etcd-multinode-048993                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m24s
	  kube-system                 kindnet-x4z7j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m19s
	  kube-system                 kube-apiserver-multinode-048993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-controller-manager-multinode-048993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-proxy-28dj6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kube-system                 kube-scheduler-multinode-048993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m17s                kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m24s                kubelet          Node multinode-048993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m24s                kubelet          Node multinode-048993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m24s                kubelet          Node multinode-048993 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m24s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m20s                node-controller  Node multinode-048993 event: Registered Node multinode-048993 in Controller
	  Normal  NodeReady                8m4s                 kubelet          Node multinode-048993 status is now: NodeReady
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node multinode-048993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node multinode-048993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node multinode-048993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                  node-controller  Node multinode-048993 event: Registered Node multinode-048993 in Controller
	
	
	Name:               multinode-048993-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-048993-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=multinode-048993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T19_32_21_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:32:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-048993-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:33:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:32:51 +0000   Sun, 18 Aug 2024 19:32:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:32:51 +0000   Sun, 18 Aug 2024 19:32:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:32:51 +0000   Sun, 18 Aug 2024 19:32:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:32:51 +0000   Sun, 18 Aug 2024 19:32:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    multinode-048993-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c34bd67fb2054565bffc7efd4e32bba6
	  System UUID:                c34bd67f-b205-4565-bffc-7efd4e32bba6
	  Boot ID:                    c4f23b01-b2a4-4e59-93a6-814d6593da13
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4d24z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-gprqg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-proxy-mvc7l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m28s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m33s (x2 over 7m34s)  kubelet     Node multinode-048993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s (x2 over 7m34s)  kubelet     Node multinode-048993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m33s (x2 over 7m34s)  kubelet     Node multinode-048993-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m13s                  kubelet     Node multinode-048993-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s (x2 over 62s)      kubelet     Node multinode-048993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 62s)      kubelet     Node multinode-048993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 62s)      kubelet     Node multinode-048993-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                42s                    kubelet     Node multinode-048993-m02 status is now: NodeReady
	
	
	Name:               multinode-048993-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-048993-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=multinode-048993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T19_33_00_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:33:00 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-048993-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:33:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:33:18 +0000   Sun, 18 Aug 2024 19:33:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:33:18 +0000   Sun, 18 Aug 2024 19:33:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:33:18 +0000   Sun, 18 Aug 2024 19:33:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:33:18 +0000   Sun, 18 Aug 2024 19:33:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    multinode-048993-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c3565f9dd1a3455aa03b2496d1c91def
	  System UUID:                c3565f9d-d1a3-455a-a03b-2496d1c91def
	  Boot ID:                    45f48ea1-8a2a-4c2e-ab34-fa7200478a56
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dg95p       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-proxy-2kq2l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m34s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m44s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m40s)  kubelet     Node multinode-048993-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m40s)  kubelet     Node multinode-048993-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m40s)  kubelet     Node multinode-048993-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m19s                  kubelet     Node multinode-048993-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet     Node multinode-048993-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet     Node multinode-048993-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet     Node multinode-048993-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m49s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m30s                  kubelet     Node multinode-048993-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s (x2 over 23s)      kubelet     Node multinode-048993-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 23s)      kubelet     Node multinode-048993-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 23s)      kubelet     Node multinode-048993-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4s                     kubelet     Node multinode-048993-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.055516] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.171897] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.144371] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.303137] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.065184] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.593553] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.063740] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.996975] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.071088] kauditd_printk_skb: 69 callbacks suppressed
	[Aug18 19:25] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +1.198213] kauditd_printk_skb: 43 callbacks suppressed
	[ +15.491035] kauditd_printk_skb: 38 callbacks suppressed
	[Aug18 19:26] kauditd_printk_skb: 12 callbacks suppressed
	[Aug18 19:31] systemd-fstab-generator[2671]: Ignoring "noauto" option for root device
	[  +0.150910] systemd-fstab-generator[2683]: Ignoring "noauto" option for root device
	[  +0.176988] systemd-fstab-generator[2697]: Ignoring "noauto" option for root device
	[  +0.150127] systemd-fstab-generator[2710]: Ignoring "noauto" option for root device
	[  +0.260152] systemd-fstab-generator[2738]: Ignoring "noauto" option for root device
	[  +8.138798] systemd-fstab-generator[2839]: Ignoring "noauto" option for root device
	[  +0.082421] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.521406] systemd-fstab-generator[2959]: Ignoring "noauto" option for root device
	[  +4.663687] kauditd_printk_skb: 74 callbacks suppressed
	[  +8.319086] kauditd_printk_skb: 34 callbacks suppressed
	[  +3.071025] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[Aug18 19:32] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a] <==
	{"level":"info","ts":"2024-08-18T19:31:41.766429Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:31:41.766482Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:31:41.764561Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:31:41.785063Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-08-18T19:31:41.787186Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-08-18T19:31:41.787238Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-18T19:31:41.792608Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8fbc2df34e14192d","initial-advertise-peer-urls":["https://192.168.39.185:2380"],"listen-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.185:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T19:31:41.792685Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T19:31:43.183219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-18T19:31:43.183334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-18T19:31:43.183385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-08-18T19:31:43.183429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:31:43.183453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-08-18T19:31:43.183480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2024-08-18T19:31:43.183506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-08-18T19:31:43.188766Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:multinode-048993 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T19:31:43.188871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:31:43.188945Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T19:31:43.188988Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T19:31:43.189005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:31:43.190244Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:31:43.191255Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T19:31:43.190312Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:31:43.192379Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-08-18T19:33:03.268773Z","caller":"traceutil/trace.go:171","msg":"trace[811109797] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"164.047609ms","start":"2024-08-18T19:33:03.104682Z","end":"2024-08-18T19:33:03.268730Z","steps":["trace[811109797] 'process raft request'  (duration: 163.629939ms)"],"step_count":1}
	
	
	==> etcd [eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121] <==
	{"level":"info","ts":"2024-08-18T19:24:53.828288Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-18T19:25:49.098014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.573628ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814266096251320458 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-048993-m02.17ece92fca2ea282\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-048993-m02.17ece92fca2ea282\" value_size:646 lease:1814266096251319858 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-18T19:25:49.098215Z","caller":"traceutil/trace.go:171","msg":"trace[2076543562] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"229.420574ms","start":"2024-08-18T19:25:48.868781Z","end":"2024-08-18T19:25:49.098202Z","steps":["trace[2076543562] 'process raft request'  (duration: 69.164798ms)","trace[2076543562] 'compare'  (duration: 159.486895ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:25:54.204057Z","caller":"traceutil/trace.go:171","msg":"trace[1090596798] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"148.49999ms","start":"2024-08-18T19:25:54.055542Z","end":"2024-08-18T19:25:54.204042Z","steps":["trace[1090596798] 'read index received'  (duration: 86.707504ms)","trace[1090596798] 'applied index is now lower than readState.Index'  (duration: 61.791606ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:25:54.204232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.690093ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-048993-m02\" ","response":"range_response_count:1 size:2886"}
	{"level":"info","ts":"2024-08-18T19:25:54.204255Z","caller":"traceutil/trace.go:171","msg":"trace[1149031216] range","detail":"{range_begin:/registry/minions/multinode-048993-m02; range_end:; response_count:1; response_revision:478; }","duration":"148.732415ms","start":"2024-08-18T19:25:54.055516Z","end":"2024-08-18T19:25:54.204248Z","steps":["trace[1149031216] 'agreement among raft nodes before linearized reading'  (duration: 148.599311ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:25:54.204422Z","caller":"traceutil/trace.go:171","msg":"trace[1153716909] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"278.404096ms","start":"2024-08-18T19:25:53.926008Z","end":"2024-08-18T19:25:54.204412Z","steps":["trace[1153716909] 'process raft request'  (duration: 216.33591ms)","trace[1153716909] 'compare'  (duration: 61.491141ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:25:54.637542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.350604ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814266096251320547 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:459 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-18T19:25:54.637713Z","caller":"traceutil/trace.go:171","msg":"trace[1477434445] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"426.862549ms","start":"2024-08-18T19:25:54.210838Z","end":"2024-08-18T19:25:54.637701Z","steps":["trace[1477434445] 'process raft request'  (duration: 198.751703ms)","trace[1477434445] 'compare'  (duration: 227.20758ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:25:54.637785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:25:54.210822Z","time spent":"426.930462ms","remote":"127.0.0.1:55684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:459 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2024-08-18T19:25:54.637896Z","caller":"traceutil/trace.go:171","msg":"trace[391627134] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"367.976841ms","start":"2024-08-18T19:25:54.269908Z","end":"2024-08-18T19:25:54.637884Z","steps":["trace[391627134] 'read index received'  (duration: 139.647608ms)","trace[391627134] 'applied index is now lower than readState.Index'  (duration: 228.328115ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:25:54.637994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.080322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T19:25:54.638030Z","caller":"traceutil/trace.go:171","msg":"trace[470266829] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:479; }","duration":"368.122542ms","start":"2024-08-18T19:25:54.269902Z","end":"2024-08-18T19:25:54.638025Z","steps":["trace[470266829] 'agreement among raft nodes before linearized reading'  (duration: 368.062252ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:25:54.638065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:25:54.269870Z","time spent":"368.189971ms","remote":"127.0.0.1:55154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-08-18T19:26:43.084535Z","caller":"traceutil/trace.go:171","msg":"trace[541571492] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"202.155449ms","start":"2024-08-18T19:26:42.882341Z","end":"2024-08-18T19:26:43.084497Z","steps":["trace[541571492] 'process raft request'  (duration: 125.00485ms)","trace[541571492] 'compare'  (duration: 77.020137ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:29:58.687091Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-18T19:29:58.688595Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-048993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	{"level":"warn","ts":"2024-08-18T19:29:58.688696Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:29:58.688813Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:29:58.740982Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:29:58.741312Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:29:58.743225Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8fbc2df34e14192d","current-leader-member-id":"8fbc2df34e14192d"}
	{"level":"info","ts":"2024-08-18T19:29:58.748425Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-08-18T19:29:58.748573Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-08-18T19:29:58.748605Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-048993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	
	
	==> kernel <==
	 19:33:22 up 9 min,  0 users,  load average: 0.37, 0.28, 0.13
	Linux multinode-048993 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09] <==
	I0818 19:29:08.708522       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:18.707426       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:29:18.707489       1 main.go:299] handling current node
	I0818 19:29:18.707509       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:29:18.707516       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:29:18.707686       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:29:18.707719       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:28.710002       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:29:28.710054       1 main.go:299] handling current node
	I0818 19:29:28.710070       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:29:28.710076       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:29:28.710248       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:29:28.710273       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:38.715120       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:29:38.715196       1 main.go:299] handling current node
	I0818 19:29:38.715213       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:29:38.715219       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:29:38.715387       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:29:38.715421       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:48.715424       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:29:48.715467       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:48.715639       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:29:48.715665       1 main.go:299] handling current node
	I0818 19:29:48.715677       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:29:48.715682       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0] <==
	I0818 19:32:36.425932       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:32:46.423430       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:32:46.423562       1 main.go:299] handling current node
	I0818 19:32:46.423592       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:32:46.423610       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:32:46.423752       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:32:46.423775       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:32:56.422828       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:32:56.422979       1 main.go:299] handling current node
	I0818 19:32:56.423018       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:32:56.423079       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:32:56.423329       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:32:56.423391       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:33:06.425684       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:33:06.425843       1 main.go:299] handling current node
	I0818 19:33:06.425928       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:33:06.425974       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:33:06.426291       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:33:06.426334       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.2.0/24] 
	I0818 19:33:16.427978       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:33:16.428105       1 main.go:299] handling current node
	I0818 19:33:16.428191       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:33:16.428236       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:33:16.428404       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:33:16.428453       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc] <==
	I0818 19:31:44.548528       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 19:31:44.549356       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:31:44.549519       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0818 19:31:44.549548       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:31:44.549779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 19:31:44.558795       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:31:44.561244       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:31:44.563051       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:31:44.563295       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:31:44.563358       1 policy_source.go:224] refreshing policies
	I0818 19:31:44.564859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 19:31:44.564946       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:31:44.565042       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:31:44.565064       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:31:44.565070       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:31:44.567306       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0818 19:31:44.581942       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0818 19:31:45.455672       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0818 19:31:46.740787       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0818 19:31:46.869586       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0818 19:31:46.884629       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0818 19:31:46.961372       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0818 19:31:46.972095       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0818 19:31:47.855353       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0818 19:31:48.148509       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3] <==
	W0818 19:29:58.711540       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711650       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711707       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711756       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711791       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711825       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711876       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711914       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711975       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712034       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712084       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712114       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712261       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712327       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712386       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712419       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712457       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712488       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712518       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712698       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.713098       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.713746       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.714069       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.716648       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.716821       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56] <==
	I0818 19:27:31.908482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:32.135803       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:27:32.135965       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:33.346887       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-048993-m03\" does not exist"
	I0818 19:27:33.347923       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:27:33.369639       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-048993-m03" podCIDRs=["10.244.3.0/24"]
	I0818 19:27:33.369891       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:33.370006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:33.659644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:34.006439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:37.553545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:43.511641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:52.947572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:27:52.947681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:52.963026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:57.462640       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:28:32.478621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:28:32.479120       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m03"
	I0818 19:28:32.499053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:28:32.510655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.120692ms"
	I0818 19:28:32.511395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.86µs"
	I0818 19:28:37.530905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:28:37.549920       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:28:37.554570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:28:47.631015       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	
	
	==> kube-controller-manager [e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145] <==
	I0818 19:32:40.754400       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:32:40.754662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:32:40.766836       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:32:40.772874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.43µs"
	I0818 19:32:40.789600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="42.615µs"
	I0818 19:32:42.902083       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:32:44.450047       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.051381ms"
	I0818 19:32:44.451186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.651µs"
	I0818 19:32:51.840368       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:32:58.696436       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:32:58.711930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:32:58.943505       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:32:58.943642       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:00.054437       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:33:00.054692       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-048993-m03\" does not exist"
	I0818 19:33:00.075367       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-048993-m03" podCIDRs=["10.244.2.0/24"]
	I0818 19:33:00.076388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:00.076544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:00.476766       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:00.845264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:02.967793       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:10.424852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:18.704729       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:33:18.704797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:18.718676       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	
	
	==> kube-proxy [e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:25:04.538381       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:25:04.554254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E0818 19:25:04.554469       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:25:04.586737       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:25:04.586825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:25:04.586866       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:25:04.590088       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:25:04.590530       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:25:04.590686       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:25:04.592650       1 config.go:197] "Starting service config controller"
	I0818 19:25:04.592731       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:25:04.592774       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:25:04.592790       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:25:04.593683       1 config.go:326] "Starting node config controller"
	I0818 19:25:04.593721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:25:04.693465       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:25:04.693589       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:25:04.693848       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:31:45.506356       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:31:45.524805       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E0818 19:31:45.524864       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:31:45.577563       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:31:45.577624       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:31:45.577651       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:31:45.583778       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:31:45.584037       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:31:45.584068       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:31:45.585868       1 config.go:197] "Starting service config controller"
	I0818 19:31:45.585912       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:31:45.585940       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:31:45.585943       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:31:45.587320       1 config.go:326] "Starting node config controller"
	I0818 19:31:45.587348       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:31:45.686661       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:31:45.686837       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:31:45.687449       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5] <==
	E0818 19:24:56.337949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.341526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 19:24:56.341573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.415417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 19:24:56.415466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.446698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 19:24:56.446862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.529984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 19:24:56.530040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.638435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 19:24:56.638490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.642234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 19:24:56.642277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.663428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:24:56.663461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.723613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 19:24:56.723686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.768549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 19:24:56.768907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.850474       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 19:24:56.850633       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0818 19:24:59.713393       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:29:58.680876       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0818 19:29:58.681746       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0818 19:29:58.682242       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c] <==
	I0818 19:31:42.272617       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:31:44.488403       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 19:31:44.488539       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 19:31:44.488569       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:31:44.488647       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:31:44.556559       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:31:44.556633       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:31:44.571051       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:31:44.571361       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:31:44.571422       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:31:44.571465       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:31:44.671861       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 19:31:50 multinode-048993 kubelet[2966]: E0818 19:31:50.685223    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009510684358430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:31:53 multinode-048993 kubelet[2966]: I0818 19:31:53.292345    2966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 18 19:32:00 multinode-048993 kubelet[2966]: E0818 19:32:00.687541    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009520687071172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:00 multinode-048993 kubelet[2966]: E0818 19:32:00.687593    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009520687071172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:10 multinode-048993 kubelet[2966]: E0818 19:32:10.690353    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009530689858103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:10 multinode-048993 kubelet[2966]: E0818 19:32:10.690395    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009530689858103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:20 multinode-048993 kubelet[2966]: E0818 19:32:20.692875    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009540692373288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:20 multinode-048993 kubelet[2966]: E0818 19:32:20.693310    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009540692373288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:30 multinode-048993 kubelet[2966]: E0818 19:32:30.697043    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009550696704322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:30 multinode-048993 kubelet[2966]: E0818 19:32:30.697087    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009550696704322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:40 multinode-048993 kubelet[2966]: E0818 19:32:40.671346    2966 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:32:40 multinode-048993 kubelet[2966]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:32:40 multinode-048993 kubelet[2966]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:32:40 multinode-048993 kubelet[2966]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:32:40 multinode-048993 kubelet[2966]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:32:40 multinode-048993 kubelet[2966]: E0818 19:32:40.698817    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009560698629066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:40 multinode-048993 kubelet[2966]: E0818 19:32:40.698839    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009560698629066,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:50 multinode-048993 kubelet[2966]: E0818 19:32:50.702037    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009570701262829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:32:50 multinode-048993 kubelet[2966]: E0818 19:32:50.702090    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009570701262829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:33:00 multinode-048993 kubelet[2966]: E0818 19:33:00.707365    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009580704557476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:33:00 multinode-048993 kubelet[2966]: E0818 19:33:00.707820    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009580704557476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:33:10 multinode-048993 kubelet[2966]: E0818 19:33:10.713289    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009590712712964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:33:10 multinode-048993 kubelet[2966]: E0818 19:33:10.713331    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009590712712964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:33:20 multinode-048993 kubelet[2966]: E0818 19:33:20.714561    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009600714278598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:33:20 multinode-048993 kubelet[2966]: E0818 19:33:20.714586    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009600714278598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:33:21.279306   45045 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-7747/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-048993 -n multinode-048993
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-048993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.55s)
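Note on the "bufio.Scanner: token too long" line in the stderr block above: that is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, whose longest lines (the cluster-config dumps earlier in the logs) exceed it. The following is a rough illustration only, not minikube's actual logs.go code, of how a reader can raise that limit with Scanner.Buffer before scanning:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative path only; the report reads .../logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default cap is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		// Without the Buffer call, this is where "token too long" surfaces.
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}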

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 stop
E0818 19:34:26.646178   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048993 stop: exit status 82 (2m0.464111509s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-048993-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-048993 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048993 status: exit status 3 (18.725474203s)

                                                
                                                
-- stdout --
	multinode-048993
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048993-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:35:44.671638   45701 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host
	E0818 19:35:44.671673   45701 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.14:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-048993 status" : exit status 3
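The stop above exits with status 82 and the error box reports GUEST_STOP_TIMEOUT, asking that logs be collected with `minikube logs --file=logs.txt`. A minimal sketch, assuming only the binary, profile name, and flags already shown in this report (it is not part of the test suite), of how a caller could detect that exit code and gather the suggested logs:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "multinode-048993" // profile name taken from this report

	stop := exec.Command("out/minikube-linux-amd64", "-p", profile, "stop")
	if err := stop.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			// 82 is the exit status reported for the GUEST_STOP_TIMEOUT failure above.
			fmt.Printf("minikube stop failed with exit status %d\n", exitErr.ExitCode())
		}
		// Collect the logs the error box recommends attaching to a GitHub issue.
		logs := exec.Command("out/minikube-linux-amd64", "-p", profile, "logs", "--file=logs.txt")
		if out, logErr := logs.CombinedOutput(); logErr != nil {
			fmt.Printf("collecting logs failed: %v\n%s\n", logErr, out)
		}
	}
}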
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-048993 -n multinode-048993
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-048993 logs -n 25: (1.42706104s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993:/home/docker/cp-test_multinode-048993-m02_multinode-048993.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n multinode-048993 sudo cat                                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-048993-m02_multinode-048993.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03:/home/docker/cp-test_multinode-048993-m02_multinode-048993-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n multinode-048993-m03 sudo cat                                   | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-048993-m02_multinode-048993-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp testdata/cp-test.txt                                                | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1791348439/001/cp-test_multinode-048993-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993:/home/docker/cp-test_multinode-048993-m03_multinode-048993.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n multinode-048993 sudo cat                                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-048993-m03_multinode-048993.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt                       | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m02:/home/docker/cp-test_multinode-048993-m03_multinode-048993-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n                                                                 | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | multinode-048993-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-048993 ssh -n multinode-048993-m02 sudo cat                                   | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | /home/docker/cp-test_multinode-048993-m03_multinode-048993-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-048993 node stop m03                                                          | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	| node    | multinode-048993 node start                                                             | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-048993                                                                | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC |                     |
	| stop    | -p multinode-048993                                                                     | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC |                     |
	| start   | -p multinode-048993                                                                     | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:29 UTC | 18 Aug 24 19:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-048993                                                                | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:33 UTC |                     |
	| node    | multinode-048993 node delete                                                            | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:33 UTC | 18 Aug 24 19:33 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-048993 stop                                                                   | multinode-048993 | jenkins | v1.33.1 | 18 Aug 24 19:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 19:29:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 19:29:57.518525   43974 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:29:57.518728   43974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:29:57.518736   43974 out.go:358] Setting ErrFile to fd 2...
	I0818 19:29:57.518740   43974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:29:57.518899   43974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:29:57.519446   43974 out.go:352] Setting JSON to false
	I0818 19:29:57.520339   43974 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4341,"bootTime":1724005056,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:29:57.520390   43974 start.go:139] virtualization: kvm guest
	I0818 19:29:57.523332   43974 out.go:177] * [multinode-048993] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:29:57.524802   43974 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:29:57.524806   43974 notify.go:220] Checking for updates...
	I0818 19:29:57.526221   43974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:29:57.527651   43974 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:29:57.528970   43974 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:29:57.530113   43974 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:29:57.531364   43974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:29:57.533046   43974 config.go:182] Loaded profile config "multinode-048993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:29:57.533148   43974 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:29:57.533584   43974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:29:57.533652   43974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:29:57.548341   43974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44071
	I0818 19:29:57.548762   43974 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:29:57.549244   43974 main.go:141] libmachine: Using API Version  1
	I0818 19:29:57.549261   43974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:29:57.549578   43974 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:29:57.549846   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:29:57.583834   43974 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 19:29:57.585031   43974 start.go:297] selected driver: kvm2
	I0818 19:29:57.585048   43974 start.go:901] validating driver "kvm2" against &{Name:multinode-048993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:29:57.585175   43974 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:29:57.585505   43974 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:29:57.585589   43974 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:29:57.599736   43974 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:29:57.600414   43974 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:29:57.600485   43974 cni.go:84] Creating CNI manager for ""
	I0818 19:29:57.600500   43974 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 19:29:57.600553   43974 start.go:340] cluster config:
	{Name:multinode-048993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:29:57.600682   43974 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:29:57.602560   43974 out.go:177] * Starting "multinode-048993" primary control-plane node in "multinode-048993" cluster
	I0818 19:29:57.603843   43974 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:29:57.603875   43974 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 19:29:57.603891   43974 cache.go:56] Caching tarball of preloaded images
	I0818 19:29:57.603979   43974 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:29:57.603995   43974 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 19:29:57.604102   43974 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/config.json ...
	I0818 19:29:57.604295   43974 start.go:360] acquireMachinesLock for multinode-048993: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:29:57.604336   43974 start.go:364] duration metric: took 23.842µs to acquireMachinesLock for "multinode-048993"
	I0818 19:29:57.604355   43974 start.go:96] Skipping create...Using existing machine configuration
	I0818 19:29:57.604364   43974 fix.go:54] fixHost starting: 
	I0818 19:29:57.604627   43974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:29:57.604657   43974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:29:57.618248   43974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33161
	I0818 19:29:57.618650   43974 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:29:57.619078   43974 main.go:141] libmachine: Using API Version  1
	I0818 19:29:57.619101   43974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:29:57.619439   43974 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:29:57.619618   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:29:57.619785   43974 main.go:141] libmachine: (multinode-048993) Calling .GetState
	I0818 19:29:57.621372   43974 fix.go:112] recreateIfNeeded on multinode-048993: state=Running err=<nil>
	W0818 19:29:57.621399   43974 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 19:29:57.623980   43974 out.go:177] * Updating the running kvm2 "multinode-048993" VM ...
	I0818 19:29:57.625184   43974 machine.go:93] provisionDockerMachine start ...
	I0818 19:29:57.625199   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:29:57.625394   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:57.628235   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.628640   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:57.628662   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.628836   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:57.629000   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.629151   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.629272   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:57.629432   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:29:57.629726   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:29:57.629742   43974 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 19:29:57.748566   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-048993
	
	I0818 19:29:57.748596   43974 main.go:141] libmachine: (multinode-048993) Calling .GetMachineName
	I0818 19:29:57.748904   43974 buildroot.go:166] provisioning hostname "multinode-048993"
	I0818 19:29:57.748928   43974 main.go:141] libmachine: (multinode-048993) Calling .GetMachineName
	I0818 19:29:57.749112   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:57.752039   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.752505   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:57.752528   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.752676   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:57.752849   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.753000   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.753146   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:57.753285   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:29:57.753485   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:29:57.753503   43974 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-048993 && echo "multinode-048993" | sudo tee /etc/hostname
	I0818 19:29:57.879922   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-048993
	
	I0818 19:29:57.879955   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:57.883044   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.883488   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:57.883527   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.883687   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:57.883858   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.884028   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:57.884214   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:57.884406   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:29:57.884569   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:29:57.884583   43974 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-048993' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-048993/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-048993' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:29:57.996321   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:29:57.996353   43974 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 19:29:57.996396   43974 buildroot.go:174] setting up certificates
	I0818 19:29:57.996408   43974 provision.go:84] configureAuth start
	I0818 19:29:57.996423   43974 main.go:141] libmachine: (multinode-048993) Calling .GetMachineName
	I0818 19:29:57.996664   43974 main.go:141] libmachine: (multinode-048993) Calling .GetIP
	I0818 19:29:57.999212   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.999646   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:57.999674   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:57.999810   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:58.001996   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.002408   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:58.002436   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.002560   43974 provision.go:143] copyHostCerts
	I0818 19:29:58.002590   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:29:58.002628   43974 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 19:29:58.002648   43974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:29:58.002731   43974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 19:29:58.002842   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:29:58.002867   43974 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 19:29:58.002874   43974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:29:58.002910   43974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 19:29:58.002987   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:29:58.003011   43974 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 19:29:58.003020   43974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:29:58.003053   43974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 19:29:58.003133   43974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.multinode-048993 san=[127.0.0.1 192.168.39.185 localhost minikube multinode-048993]
	I0818 19:29:58.366644   43974 provision.go:177] copyRemoteCerts
	I0818 19:29:58.366703   43974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:29:58.366749   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:58.369136   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.369435   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:58.369467   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.369593   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:58.369809   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:58.369963   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:58.370140   43974 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:29:58.459730   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0818 19:29:58.459809   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:29:58.488777   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0818 19:29:58.488858   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0818 19:29:58.519175   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0818 19:29:58.519238   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 19:29:58.544041   43974 provision.go:87] duration metric: took 547.61858ms to configureAuth
	I0818 19:29:58.544065   43974 buildroot.go:189] setting minikube options for container-runtime
	I0818 19:29:58.544282   43974 config.go:182] Loaded profile config "multinode-048993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:29:58.544380   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:29:58.547279   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.547708   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:29:58.547733   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:29:58.547916   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:29:58.548105   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:58.548402   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:29:58.548573   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:29:58.548752   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:29:58.548941   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:29:58.548959   43974 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 19:31:29.251045   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 19:31:29.251100   43974 machine.go:96] duration metric: took 1m31.62590483s to provisionDockerMachine
	I0818 19:31:29.251127   43974 start.go:293] postStartSetup for "multinode-048993" (driver="kvm2")
	I0818 19:31:29.251155   43974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:31:29.251191   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.251659   43974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:31:29.251706   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:31:29.254718   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.255369   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.255423   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.255590   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:31:29.255812   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.255997   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:31:29.256138   43974 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:31:29.347154   43974 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:31:29.351316   43974 command_runner.go:130] > NAME=Buildroot
	I0818 19:31:29.351332   43974 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0818 19:31:29.351338   43974 command_runner.go:130] > ID=buildroot
	I0818 19:31:29.351345   43974 command_runner.go:130] > VERSION_ID=2023.02.9
	I0818 19:31:29.351352   43974 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0818 19:31:29.351484   43974 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 19:31:29.351510   43974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 19:31:29.351568   43974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 19:31:29.351638   43974 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 19:31:29.351649   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /etc/ssl/certs/149342.pem
	I0818 19:31:29.351727   43974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:31:29.361136   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:31:29.388847   43974 start.go:296] duration metric: took 137.705611ms for postStartSetup
	I0818 19:31:29.388897   43974 fix.go:56] duration metric: took 1m31.784533308s for fixHost
	I0818 19:31:29.388923   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:31:29.391760   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.392114   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.392140   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.392289   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:31:29.392479   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.392671   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.392805   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:31:29.392977   43974 main.go:141] libmachine: Using SSH client type: native
	I0818 19:31:29.393143   43974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.185 22 <nil> <nil>}
	I0818 19:31:29.393152   43974 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 19:31:29.504365   43974 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724009489.482227779
	
	I0818 19:31:29.504387   43974 fix.go:216] guest clock: 1724009489.482227779
	I0818 19:31:29.504404   43974 fix.go:229] Guest: 2024-08-18 19:31:29.482227779 +0000 UTC Remote: 2024-08-18 19:31:29.388907208 +0000 UTC m=+91.905779220 (delta=93.320571ms)
	I0818 19:31:29.504458   43974 fix.go:200] guest clock delta is within tolerance: 93.320571ms
	I0818 19:31:29.504466   43974 start.go:83] releasing machines lock for "multinode-048993", held for 1m31.900116845s
	I0818 19:31:29.504496   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.504761   43974 main.go:141] libmachine: (multinode-048993) Calling .GetIP
	I0818 19:31:29.507643   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.508016   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.508044   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.508305   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.508806   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.508996   43974 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:31:29.509088   43974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:31:29.509138   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:31:29.509235   43974 ssh_runner.go:195] Run: cat /version.json
	I0818 19:31:29.509264   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:31:29.512122   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.512224   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.512521   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.512547   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.512574   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:29.512595   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:29.512661   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:31:29.512842   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:31:29.512848   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.513038   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:31:29.513043   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:31:29.513197   43974 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:31:29.513234   43974 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:31:29.513317   43974 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:31:29.592430   43974 command_runner.go:130] > {"iso_version": "v1.33.1-1723740674-19452", "kicbase_version": "v0.0.44-1723650208-19443", "minikube_version": "v1.33.1", "commit": "3bcdc720eef782394bf386d06fca73d1934e08fb"}
	I0818 19:31:29.592681   43974 ssh_runner.go:195] Run: systemctl --version
	I0818 19:31:29.614467   43974 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0818 19:31:29.615119   43974 command_runner.go:130] > systemd 252 (252)
	I0818 19:31:29.615156   43974 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0818 19:31:29.615221   43974 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 19:31:29.776643   43974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 19:31:29.783724   43974 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0818 19:31:29.784148   43974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 19:31:29.784235   43974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:31:29.793474   43974 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0818 19:31:29.793491   43974 start.go:495] detecting cgroup driver to use...
	I0818 19:31:29.793560   43974 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 19:31:29.809056   43974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 19:31:29.822374   43974 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:31:29.822440   43974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:31:29.836427   43974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:31:29.849590   43974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:31:29.994616   43974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:31:30.148204   43974 docker.go:233] disabling docker service ...
	I0818 19:31:30.148277   43974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:31:30.166055   43974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:31:30.179750   43974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 19:31:30.323754   43974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 19:31:30.465888   43974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:31:30.479527   43974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:31:30.498590   43974 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0818 19:31:30.499033   43974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 19:31:30.499081   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.509619   43974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 19:31:30.509684   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.519730   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.529701   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.539467   43974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:31:30.549664   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.559364   43974 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.570461   43974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:31:30.580154   43974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:31:30.588842   43974 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0818 19:31:30.588894   43974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:31:30.597682   43974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:31:30.734471   43974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 19:31:38.400824   43974 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.666319231s)
	I0818 19:31:38.400857   43974 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 19:31:38.400908   43974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 19:31:38.406571   43974 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0818 19:31:38.406589   43974 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0818 19:31:38.406595   43974 command_runner.go:130] > Device: 0,22	Inode: 1333        Links: 1
	I0818 19:31:38.406602   43974 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0818 19:31:38.406608   43974 command_runner.go:130] > Access: 2024-08-18 19:31:38.302002172 +0000
	I0818 19:31:38.406620   43974 command_runner.go:130] > Modify: 2024-08-18 19:31:38.266001388 +0000
	I0818 19:31:38.406628   43974 command_runner.go:130] > Change: 2024-08-18 19:31:38.266001388 +0000
	I0818 19:31:38.406633   43974 command_runner.go:130] >  Birth: -
	I0818 19:31:38.406923   43974 start.go:563] Will wait 60s for crictl version
	I0818 19:31:38.406973   43974 ssh_runner.go:195] Run: which crictl
	I0818 19:31:38.410934   43974 command_runner.go:130] > /usr/bin/crictl
	I0818 19:31:38.411031   43974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:31:38.448335   43974 command_runner.go:130] > Version:  0.1.0
	I0818 19:31:38.448357   43974 command_runner.go:130] > RuntimeName:  cri-o
	I0818 19:31:38.448364   43974 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0818 19:31:38.448371   43974 command_runner.go:130] > RuntimeApiVersion:  v1
	I0818 19:31:38.448557   43974 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 19:31:38.448639   43974 ssh_runner.go:195] Run: crio --version
	I0818 19:31:38.476483   43974 command_runner.go:130] > crio version 1.29.1
	I0818 19:31:38.476504   43974 command_runner.go:130] > Version:        1.29.1
	I0818 19:31:38.476513   43974 command_runner.go:130] > GitCommit:      unknown
	I0818 19:31:38.476519   43974 command_runner.go:130] > GitCommitDate:  unknown
	I0818 19:31:38.476525   43974 command_runner.go:130] > GitTreeState:   clean
	I0818 19:31:38.476532   43974 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0818 19:31:38.476538   43974 command_runner.go:130] > GoVersion:      go1.21.6
	I0818 19:31:38.476544   43974 command_runner.go:130] > Compiler:       gc
	I0818 19:31:38.476551   43974 command_runner.go:130] > Platform:       linux/amd64
	I0818 19:31:38.476556   43974 command_runner.go:130] > Linkmode:       dynamic
	I0818 19:31:38.476564   43974 command_runner.go:130] > BuildTags:      
	I0818 19:31:38.476571   43974 command_runner.go:130] >   containers_image_ostree_stub
	I0818 19:31:38.476581   43974 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0818 19:31:38.476591   43974 command_runner.go:130] >   btrfs_noversion
	I0818 19:31:38.476600   43974 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0818 19:31:38.476613   43974 command_runner.go:130] >   libdm_no_deferred_remove
	I0818 19:31:38.476623   43974 command_runner.go:130] >   seccomp
	I0818 19:31:38.476634   43974 command_runner.go:130] > LDFlags:          unknown
	I0818 19:31:38.476642   43974 command_runner.go:130] > SeccompEnabled:   true
	I0818 19:31:38.476649   43974 command_runner.go:130] > AppArmorEnabled:  false
	I0818 19:31:38.476721   43974 ssh_runner.go:195] Run: crio --version
	I0818 19:31:38.505411   43974 command_runner.go:130] > crio version 1.29.1
	I0818 19:31:38.505436   43974 command_runner.go:130] > Version:        1.29.1
	I0818 19:31:38.505443   43974 command_runner.go:130] > GitCommit:      unknown
	I0818 19:31:38.505447   43974 command_runner.go:130] > GitCommitDate:  unknown
	I0818 19:31:38.505452   43974 command_runner.go:130] > GitTreeState:   clean
	I0818 19:31:38.505460   43974 command_runner.go:130] > BuildDate:      2024-08-15T22:11:01Z
	I0818 19:31:38.505467   43974 command_runner.go:130] > GoVersion:      go1.21.6
	I0818 19:31:38.505474   43974 command_runner.go:130] > Compiler:       gc
	I0818 19:31:38.505481   43974 command_runner.go:130] > Platform:       linux/amd64
	I0818 19:31:38.505487   43974 command_runner.go:130] > Linkmode:       dynamic
	I0818 19:31:38.505496   43974 command_runner.go:130] > BuildTags:      
	I0818 19:31:38.505520   43974 command_runner.go:130] >   containers_image_ostree_stub
	I0818 19:31:38.505528   43974 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0818 19:31:38.505532   43974 command_runner.go:130] >   btrfs_noversion
	I0818 19:31:38.505536   43974 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0818 19:31:38.505543   43974 command_runner.go:130] >   libdm_no_deferred_remove
	I0818 19:31:38.505552   43974 command_runner.go:130] >   seccomp
	I0818 19:31:38.505562   43974 command_runner.go:130] > LDFlags:          unknown
	I0818 19:31:38.505572   43974 command_runner.go:130] > SeccompEnabled:   true
	I0818 19:31:38.505583   43974 command_runner.go:130] > AppArmorEnabled:  false
	I0818 19:31:38.508632   43974 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 19:31:38.509980   43974 main.go:141] libmachine: (multinode-048993) Calling .GetIP
	I0818 19:31:38.512741   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:38.513180   43974 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:31:38.513214   43974 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:31:38.513428   43974 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 19:31:38.518811   43974 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0818 19:31:38.518915   43974 kubeadm.go:883] updating cluster {Name:multinode-048993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:31:38.519056   43974 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:31:38.519098   43974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:31:38.567557   43974 command_runner.go:130] > {
	I0818 19:31:38.567583   43974 command_runner.go:130] >   "images": [
	I0818 19:31:38.567587   43974 command_runner.go:130] >     {
	I0818 19:31:38.567595   43974 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0818 19:31:38.567602   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567608   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0818 19:31:38.567611   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567615   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567624   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0818 19:31:38.567631   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0818 19:31:38.567634   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567639   43974 command_runner.go:130] >       "size": "87165492",
	I0818 19:31:38.567643   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567647   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.567652   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567656   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567660   43974 command_runner.go:130] >     },
	I0818 19:31:38.567664   43974 command_runner.go:130] >     {
	I0818 19:31:38.567669   43974 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0818 19:31:38.567674   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567679   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0818 19:31:38.567686   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567690   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567697   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0818 19:31:38.567707   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0818 19:31:38.567710   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567714   43974 command_runner.go:130] >       "size": "87190579",
	I0818 19:31:38.567718   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567725   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.567729   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567733   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567738   43974 command_runner.go:130] >     },
	I0818 19:31:38.567741   43974 command_runner.go:130] >     {
	I0818 19:31:38.567747   43974 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0818 19:31:38.567753   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567758   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0818 19:31:38.567761   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567765   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567774   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0818 19:31:38.567781   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0818 19:31:38.567785   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567790   43974 command_runner.go:130] >       "size": "1363676",
	I0818 19:31:38.567796   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567800   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.567806   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567810   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567813   43974 command_runner.go:130] >     },
	I0818 19:31:38.567816   43974 command_runner.go:130] >     {
	I0818 19:31:38.567822   43974 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0818 19:31:38.567829   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567835   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0818 19:31:38.567841   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567844   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567852   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0818 19:31:38.567864   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0818 19:31:38.567870   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567874   43974 command_runner.go:130] >       "size": "31470524",
	I0818 19:31:38.567879   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567882   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.567887   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567893   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567897   43974 command_runner.go:130] >     },
	I0818 19:31:38.567902   43974 command_runner.go:130] >     {
	I0818 19:31:38.567908   43974 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0818 19:31:38.567915   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.567920   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0818 19:31:38.567924   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567929   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.567938   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0818 19:31:38.567945   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0818 19:31:38.567951   43974 command_runner.go:130] >       ],
	I0818 19:31:38.567955   43974 command_runner.go:130] >       "size": "61245718",
	I0818 19:31:38.567959   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.567963   43974 command_runner.go:130] >       "username": "nonroot",
	I0818 19:31:38.567967   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.567972   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.567977   43974 command_runner.go:130] >     },
	I0818 19:31:38.567982   43974 command_runner.go:130] >     {
	I0818 19:31:38.567989   43974 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0818 19:31:38.567995   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568001   43974 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0818 19:31:38.568006   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568011   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568019   43974 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0818 19:31:38.568026   43974 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0818 19:31:38.568032   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568036   43974 command_runner.go:130] >       "size": "149009664",
	I0818 19:31:38.568040   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568047   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.568051   43974 command_runner.go:130] >       },
	I0818 19:31:38.568055   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568060   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568065   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568070   43974 command_runner.go:130] >     },
	I0818 19:31:38.568073   43974 command_runner.go:130] >     {
	I0818 19:31:38.568079   43974 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0818 19:31:38.568084   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568089   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0818 19:31:38.568095   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568099   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568106   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0818 19:31:38.568115   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0818 19:31:38.568118   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568122   43974 command_runner.go:130] >       "size": "95233506",
	I0818 19:31:38.568126   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568130   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.568134   43974 command_runner.go:130] >       },
	I0818 19:31:38.568138   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568142   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568146   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568149   43974 command_runner.go:130] >     },
	I0818 19:31:38.568153   43974 command_runner.go:130] >     {
	I0818 19:31:38.568159   43974 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0818 19:31:38.568174   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568181   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0818 19:31:38.568185   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568189   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568205   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0818 19:31:38.568215   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0818 19:31:38.568218   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568223   43974 command_runner.go:130] >       "size": "89437512",
	I0818 19:31:38.568227   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568231   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.568236   43974 command_runner.go:130] >       },
	I0818 19:31:38.568240   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568244   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568247   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568250   43974 command_runner.go:130] >     },
	I0818 19:31:38.568254   43974 command_runner.go:130] >     {
	I0818 19:31:38.568259   43974 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0818 19:31:38.568263   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568267   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0818 19:31:38.568271   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568274   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568281   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0818 19:31:38.568288   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0818 19:31:38.568291   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568295   43974 command_runner.go:130] >       "size": "92728217",
	I0818 19:31:38.568299   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.568302   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568308   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568312   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568317   43974 command_runner.go:130] >     },
	I0818 19:31:38.568320   43974 command_runner.go:130] >     {
	I0818 19:31:38.568326   43974 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0818 19:31:38.568332   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568336   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0818 19:31:38.568342   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568346   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568353   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0818 19:31:38.568362   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0818 19:31:38.568365   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568370   43974 command_runner.go:130] >       "size": "68420936",
	I0818 19:31:38.568375   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568379   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.568385   43974 command_runner.go:130] >       },
	I0818 19:31:38.568388   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568392   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568396   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.568399   43974 command_runner.go:130] >     },
	I0818 19:31:38.568403   43974 command_runner.go:130] >     {
	I0818 19:31:38.568410   43974 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0818 19:31:38.568414   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.568419   43974 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0818 19:31:38.568424   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568428   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.568434   43974 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0818 19:31:38.568443   43974 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0818 19:31:38.568447   43974 command_runner.go:130] >       ],
	I0818 19:31:38.568451   43974 command_runner.go:130] >       "size": "742080",
	I0818 19:31:38.568455   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.568459   43974 command_runner.go:130] >         "value": "65535"
	I0818 19:31:38.568462   43974 command_runner.go:130] >       },
	I0818 19:31:38.568466   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.568472   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.568476   43974 command_runner.go:130] >       "pinned": true
	I0818 19:31:38.568479   43974 command_runner.go:130] >     }
	I0818 19:31:38.568482   43974 command_runner.go:130] >   ]
	I0818 19:31:38.568485   43974 command_runner.go:130] > }
	I0818 19:31:38.569253   43974 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:31:38.569269   43974 crio.go:433] Images already preloaded, skipping extraction
	I0818 19:31:38.569318   43974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:31:38.604688   43974 command_runner.go:130] > {
	I0818 19:31:38.604717   43974 command_runner.go:130] >   "images": [
	I0818 19:31:38.604723   43974 command_runner.go:130] >     {
	I0818 19:31:38.604735   43974 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0818 19:31:38.604743   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.604752   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0818 19:31:38.604758   43974 command_runner.go:130] >       ],
	I0818 19:31:38.604764   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.604776   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0818 19:31:38.604787   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0818 19:31:38.604793   43974 command_runner.go:130] >       ],
	I0818 19:31:38.604801   43974 command_runner.go:130] >       "size": "87165492",
	I0818 19:31:38.604810   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.604816   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.604826   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.604836   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.604841   43974 command_runner.go:130] >     },
	I0818 19:31:38.604847   43974 command_runner.go:130] >     {
	I0818 19:31:38.604856   43974 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0818 19:31:38.604867   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.604903   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0818 19:31:38.604914   43974 command_runner.go:130] >       ],
	I0818 19:31:38.604920   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.604932   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0818 19:31:38.604947   43974 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0818 19:31:38.604956   43974 command_runner.go:130] >       ],
	I0818 19:31:38.604963   43974 command_runner.go:130] >       "size": "87190579",
	I0818 19:31:38.604972   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.604992   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605002   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605011   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605019   43974 command_runner.go:130] >     },
	I0818 19:31:38.605027   43974 command_runner.go:130] >     {
	I0818 19:31:38.605037   43974 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0818 19:31:38.605047   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605059   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0818 19:31:38.605068   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605076   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605090   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0818 19:31:38.605103   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0818 19:31:38.605107   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605112   43974 command_runner.go:130] >       "size": "1363676",
	I0818 19:31:38.605115   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.605119   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605123   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605130   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605134   43974 command_runner.go:130] >     },
	I0818 19:31:38.605139   43974 command_runner.go:130] >     {
	I0818 19:31:38.605145   43974 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0818 19:31:38.605152   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605157   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0818 19:31:38.605163   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605168   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605178   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0818 19:31:38.605190   43974 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0818 19:31:38.605196   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605200   43974 command_runner.go:130] >       "size": "31470524",
	I0818 19:31:38.605207   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.605219   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605225   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605229   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605235   43974 command_runner.go:130] >     },
	I0818 19:31:38.605238   43974 command_runner.go:130] >     {
	I0818 19:31:38.605246   43974 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0818 19:31:38.605250   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605262   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0818 19:31:38.605270   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605280   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605294   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0818 19:31:38.605308   43974 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0818 19:31:38.605316   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605326   43974 command_runner.go:130] >       "size": "61245718",
	I0818 19:31:38.605333   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.605339   43974 command_runner.go:130] >       "username": "nonroot",
	I0818 19:31:38.605346   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605350   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605356   43974 command_runner.go:130] >     },
	I0818 19:31:38.605361   43974 command_runner.go:130] >     {
	I0818 19:31:38.605369   43974 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0818 19:31:38.605376   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605381   43974 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0818 19:31:38.605386   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605391   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605399   43974 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0818 19:31:38.605408   43974 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0818 19:31:38.605415   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605419   43974 command_runner.go:130] >       "size": "149009664",
	I0818 19:31:38.605426   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605430   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.605436   43974 command_runner.go:130] >       },
	I0818 19:31:38.605441   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605446   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605450   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605456   43974 command_runner.go:130] >     },
	I0818 19:31:38.605459   43974 command_runner.go:130] >     {
	I0818 19:31:38.605468   43974 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0818 19:31:38.605474   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605479   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0818 19:31:38.605484   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605488   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605497   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0818 19:31:38.605508   43974 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0818 19:31:38.605514   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605518   43974 command_runner.go:130] >       "size": "95233506",
	I0818 19:31:38.605523   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605528   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.605533   43974 command_runner.go:130] >       },
	I0818 19:31:38.605537   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605543   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605548   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605553   43974 command_runner.go:130] >     },
	I0818 19:31:38.605557   43974 command_runner.go:130] >     {
	I0818 19:31:38.605565   43974 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0818 19:31:38.605572   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605577   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0818 19:31:38.605583   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605587   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605603   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0818 19:31:38.605613   43974 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0818 19:31:38.605617   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605620   43974 command_runner.go:130] >       "size": "89437512",
	I0818 19:31:38.605624   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605627   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.605631   43974 command_runner.go:130] >       },
	I0818 19:31:38.605635   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605639   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605642   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605646   43974 command_runner.go:130] >     },
	I0818 19:31:38.605649   43974 command_runner.go:130] >     {
	I0818 19:31:38.605655   43974 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0818 19:31:38.605658   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605664   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0818 19:31:38.605667   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605672   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605679   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0818 19:31:38.605687   43974 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0818 19:31:38.605693   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605697   43974 command_runner.go:130] >       "size": "92728217",
	I0818 19:31:38.605702   43974 command_runner.go:130] >       "uid": null,
	I0818 19:31:38.605707   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605712   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605716   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605719   43974 command_runner.go:130] >     },
	I0818 19:31:38.605724   43974 command_runner.go:130] >     {
	I0818 19:31:38.605730   43974 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0818 19:31:38.605734   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605739   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0818 19:31:38.605743   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605747   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605756   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0818 19:31:38.605765   43974 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0818 19:31:38.605771   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605775   43974 command_runner.go:130] >       "size": "68420936",
	I0818 19:31:38.605781   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605784   43974 command_runner.go:130] >         "value": "0"
	I0818 19:31:38.605790   43974 command_runner.go:130] >       },
	I0818 19:31:38.605794   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605800   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605804   43974 command_runner.go:130] >       "pinned": false
	I0818 19:31:38.605809   43974 command_runner.go:130] >     },
	I0818 19:31:38.605813   43974 command_runner.go:130] >     {
	I0818 19:31:38.605821   43974 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0818 19:31:38.605825   43974 command_runner.go:130] >       "repoTags": [
	I0818 19:31:38.605832   43974 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0818 19:31:38.605835   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605839   43974 command_runner.go:130] >       "repoDigests": [
	I0818 19:31:38.605846   43974 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0818 19:31:38.605855   43974 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0818 19:31:38.605861   43974 command_runner.go:130] >       ],
	I0818 19:31:38.605865   43974 command_runner.go:130] >       "size": "742080",
	I0818 19:31:38.605871   43974 command_runner.go:130] >       "uid": {
	I0818 19:31:38.605877   43974 command_runner.go:130] >         "value": "65535"
	I0818 19:31:38.605882   43974 command_runner.go:130] >       },
	I0818 19:31:38.605886   43974 command_runner.go:130] >       "username": "",
	I0818 19:31:38.605892   43974 command_runner.go:130] >       "spec": null,
	I0818 19:31:38.605896   43974 command_runner.go:130] >       "pinned": true
	I0818 19:31:38.605902   43974 command_runner.go:130] >     }
	I0818 19:31:38.605905   43974 command_runner.go:130] >   ]
	I0818 19:31:38.605909   43974 command_runner.go:130] > }
	I0818 19:31:38.606062   43974 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:31:38.606077   43974 cache_images.go:84] Images are preloaded, skipping loading
	I0818 19:31:38.606089   43974 kubeadm.go:934] updating node { 192.168.39.185 8443 v1.31.0 crio true true} ...
	I0818 19:31:38.606223   43974 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-048993 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.185
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 19:31:38.606305   43974 ssh_runner.go:195] Run: crio config
	I0818 19:31:38.639285   43974 command_runner.go:130] ! time="2024-08-18 19:31:38.616742718Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0818 19:31:38.645727   43974 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0818 19:31:38.653313   43974 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0818 19:31:38.653341   43974 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0818 19:31:38.653351   43974 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0818 19:31:38.653356   43974 command_runner.go:130] > #
	I0818 19:31:38.653378   43974 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0818 19:31:38.653391   43974 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0818 19:31:38.653404   43974 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0818 19:31:38.653417   43974 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0818 19:31:38.653425   43974 command_runner.go:130] > # reload'.
	I0818 19:31:38.653435   43974 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0818 19:31:38.653448   43974 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0818 19:31:38.653460   43974 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0818 19:31:38.653471   43974 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0818 19:31:38.653480   43974 command_runner.go:130] > [crio]
	I0818 19:31:38.653492   43974 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0818 19:31:38.653502   43974 command_runner.go:130] > # containers images, in this directory.
	I0818 19:31:38.653512   43974 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0818 19:31:38.653539   43974 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0818 19:31:38.653550   43974 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0818 19:31:38.653561   43974 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0818 19:31:38.653570   43974 command_runner.go:130] > # imagestore = ""
	I0818 19:31:38.653583   43974 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0818 19:31:38.653596   43974 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0818 19:31:38.653606   43974 command_runner.go:130] > storage_driver = "overlay"
	I0818 19:31:38.653618   43974 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0818 19:31:38.653633   43974 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0818 19:31:38.653641   43974 command_runner.go:130] > storage_option = [
	I0818 19:31:38.653649   43974 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0818 19:31:38.653652   43974 command_runner.go:130] > ]
	I0818 19:31:38.653660   43974 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0818 19:31:38.653668   43974 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0818 19:31:38.653675   43974 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0818 19:31:38.653681   43974 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0818 19:31:38.653689   43974 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0818 19:31:38.653696   43974 command_runner.go:130] > # always happen on a node reboot
	I0818 19:31:38.653700   43974 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0818 19:31:38.653712   43974 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0818 19:31:38.653719   43974 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0818 19:31:38.653726   43974 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0818 19:31:38.653731   43974 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0818 19:31:38.653740   43974 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0818 19:31:38.653749   43974 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0818 19:31:38.653755   43974 command_runner.go:130] > # internal_wipe = true
	I0818 19:31:38.653763   43974 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0818 19:31:38.653770   43974 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0818 19:31:38.653775   43974 command_runner.go:130] > # internal_repair = false
	I0818 19:31:38.653782   43974 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0818 19:31:38.653791   43974 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0818 19:31:38.653798   43974 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0818 19:31:38.653803   43974 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0818 19:31:38.653811   43974 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0818 19:31:38.653817   43974 command_runner.go:130] > [crio.api]
	I0818 19:31:38.653823   43974 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0818 19:31:38.653829   43974 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0818 19:31:38.653835   43974 command_runner.go:130] > # IP address on which the stream server will listen.
	I0818 19:31:38.653841   43974 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0818 19:31:38.653847   43974 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0818 19:31:38.653854   43974 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0818 19:31:38.653857   43974 command_runner.go:130] > # stream_port = "0"
	I0818 19:31:38.653862   43974 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0818 19:31:38.653868   43974 command_runner.go:130] > # stream_enable_tls = false
	I0818 19:31:38.653874   43974 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0818 19:31:38.653880   43974 command_runner.go:130] > # stream_idle_timeout = ""
	I0818 19:31:38.653886   43974 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0818 19:31:38.653893   43974 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0818 19:31:38.653898   43974 command_runner.go:130] > # minutes.
	I0818 19:31:38.653902   43974 command_runner.go:130] > # stream_tls_cert = ""
	I0818 19:31:38.653910   43974 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0818 19:31:38.653918   43974 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0818 19:31:38.653924   43974 command_runner.go:130] > # stream_tls_key = ""
	I0818 19:31:38.653929   43974 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0818 19:31:38.653937   43974 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0818 19:31:38.653951   43974 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0818 19:31:38.653958   43974 command_runner.go:130] > # stream_tls_ca = ""
	I0818 19:31:38.653965   43974 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0818 19:31:38.653971   43974 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0818 19:31:38.653979   43974 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0818 19:31:38.653987   43974 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0818 19:31:38.653995   43974 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0818 19:31:38.654003   43974 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0818 19:31:38.654007   43974 command_runner.go:130] > [crio.runtime]
	I0818 19:31:38.654013   43974 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0818 19:31:38.654019   43974 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0818 19:31:38.654023   43974 command_runner.go:130] > # "nofile=1024:2048"
	I0818 19:31:38.654031   43974 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0818 19:31:38.654035   43974 command_runner.go:130] > # default_ulimits = [
	I0818 19:31:38.654041   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654047   43974 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0818 19:31:38.654053   43974 command_runner.go:130] > # no_pivot = false
	I0818 19:31:38.654059   43974 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0818 19:31:38.654067   43974 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0818 19:31:38.654074   43974 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0818 19:31:38.654079   43974 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0818 19:31:38.654086   43974 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0818 19:31:38.654092   43974 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0818 19:31:38.654099   43974 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0818 19:31:38.654103   43974 command_runner.go:130] > # Cgroup setting for conmon
	I0818 19:31:38.654111   43974 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0818 19:31:38.654119   43974 command_runner.go:130] > conmon_cgroup = "pod"
	I0818 19:31:38.654128   43974 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0818 19:31:38.654133   43974 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0818 19:31:38.654142   43974 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0818 19:31:38.654148   43974 command_runner.go:130] > conmon_env = [
	I0818 19:31:38.654154   43974 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0818 19:31:38.654159   43974 command_runner.go:130] > ]
	I0818 19:31:38.654164   43974 command_runner.go:130] > # Additional environment variables to set for all the
	I0818 19:31:38.654171   43974 command_runner.go:130] > # containers. These are overridden if set in the
	I0818 19:31:38.654177   43974 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0818 19:31:38.654183   43974 command_runner.go:130] > # default_env = [
	I0818 19:31:38.654186   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654192   43974 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0818 19:31:38.654203   43974 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0818 19:31:38.654210   43974 command_runner.go:130] > # selinux = false
	I0818 19:31:38.654216   43974 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0818 19:31:38.654223   43974 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0818 19:31:38.654229   43974 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0818 19:31:38.654235   43974 command_runner.go:130] > # seccomp_profile = ""
	I0818 19:31:38.654240   43974 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0818 19:31:38.654247   43974 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0818 19:31:38.654257   43974 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0818 19:31:38.654266   43974 command_runner.go:130] > # which might increase security.
	I0818 19:31:38.654277   43974 command_runner.go:130] > # This option is currently deprecated,
	I0818 19:31:38.654290   43974 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0818 19:31:38.654301   43974 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0818 19:31:38.654314   43974 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0818 19:31:38.654326   43974 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0818 19:31:38.654339   43974 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0818 19:31:38.654351   43974 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0818 19:31:38.654361   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.654372   43974 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0818 19:31:38.654380   43974 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0818 19:31:38.654387   43974 command_runner.go:130] > # the cgroup blockio controller.
	I0818 19:31:38.654392   43974 command_runner.go:130] > # blockio_config_file = ""
	I0818 19:31:38.654400   43974 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0818 19:31:38.654406   43974 command_runner.go:130] > # blockio parameters.
	I0818 19:31:38.654410   43974 command_runner.go:130] > # blockio_reload = false
	I0818 19:31:38.654418   43974 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0818 19:31:38.654421   43974 command_runner.go:130] > # irqbalance daemon.
	I0818 19:31:38.654428   43974 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0818 19:31:38.654435   43974 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0818 19:31:38.654443   43974 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0818 19:31:38.654451   43974 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0818 19:31:38.654459   43974 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0818 19:31:38.654465   43974 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0818 19:31:38.654472   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.654476   43974 command_runner.go:130] > # rdt_config_file = ""
	I0818 19:31:38.654482   43974 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0818 19:31:38.654488   43974 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0818 19:31:38.654502   43974 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0818 19:31:38.654509   43974 command_runner.go:130] > # separate_pull_cgroup = ""
	I0818 19:31:38.654515   43974 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0818 19:31:38.654523   43974 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0818 19:31:38.654529   43974 command_runner.go:130] > # will be added.
	I0818 19:31:38.654533   43974 command_runner.go:130] > # default_capabilities = [
	I0818 19:31:38.654539   43974 command_runner.go:130] > # 	"CHOWN",
	I0818 19:31:38.654544   43974 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0818 19:31:38.654549   43974 command_runner.go:130] > # 	"FSETID",
	I0818 19:31:38.654554   43974 command_runner.go:130] > # 	"FOWNER",
	I0818 19:31:38.654560   43974 command_runner.go:130] > # 	"SETGID",
	I0818 19:31:38.654564   43974 command_runner.go:130] > # 	"SETUID",
	I0818 19:31:38.654570   43974 command_runner.go:130] > # 	"SETPCAP",
	I0818 19:31:38.654574   43974 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0818 19:31:38.654581   43974 command_runner.go:130] > # 	"KILL",
	I0818 19:31:38.654585   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654594   43974 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0818 19:31:38.654602   43974 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0818 19:31:38.654609   43974 command_runner.go:130] > # add_inheritable_capabilities = false
	I0818 19:31:38.654615   43974 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0818 19:31:38.654622   43974 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0818 19:31:38.654626   43974 command_runner.go:130] > default_sysctls = [
	I0818 19:31:38.654631   43974 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0818 19:31:38.654638   43974 command_runner.go:130] > ]
	I0818 19:31:38.654643   43974 command_runner.go:130] > # List of devices on the host that a
	I0818 19:31:38.654651   43974 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0818 19:31:38.654655   43974 command_runner.go:130] > # allowed_devices = [
	I0818 19:31:38.654659   43974 command_runner.go:130] > # 	"/dev/fuse",
	I0818 19:31:38.654662   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654666   43974 command_runner.go:130] > # List of additional devices. specified as
	I0818 19:31:38.654676   43974 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0818 19:31:38.654683   43974 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0818 19:31:38.654688   43974 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0818 19:31:38.654694   43974 command_runner.go:130] > # additional_devices = [
	I0818 19:31:38.654698   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654704   43974 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0818 19:31:38.654710   43974 command_runner.go:130] > # cdi_spec_dirs = [
	I0818 19:31:38.654714   43974 command_runner.go:130] > # 	"/etc/cdi",
	I0818 19:31:38.654720   43974 command_runner.go:130] > # 	"/var/run/cdi",
	I0818 19:31:38.654723   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654731   43974 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0818 19:31:38.654738   43974 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0818 19:31:38.654745   43974 command_runner.go:130] > # Defaults to false.
	I0818 19:31:38.654749   43974 command_runner.go:130] > # device_ownership_from_security_context = false
	I0818 19:31:38.654757   43974 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0818 19:31:38.654764   43974 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0818 19:31:38.654768   43974 command_runner.go:130] > # hooks_dir = [
	I0818 19:31:38.654773   43974 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0818 19:31:38.654779   43974 command_runner.go:130] > # ]
	I0818 19:31:38.654785   43974 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0818 19:31:38.654792   43974 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0818 19:31:38.654799   43974 command_runner.go:130] > # its default mounts from the following two files:
	I0818 19:31:38.654802   43974 command_runner.go:130] > #
	I0818 19:31:38.654808   43974 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0818 19:31:38.654816   43974 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0818 19:31:38.654822   43974 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0818 19:31:38.654827   43974 command_runner.go:130] > #
	I0818 19:31:38.654833   43974 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0818 19:31:38.654839   43974 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0818 19:31:38.654849   43974 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0818 19:31:38.654855   43974 command_runner.go:130] > #      only add mounts it finds in this file.
	I0818 19:31:38.654858   43974 command_runner.go:130] > #
	I0818 19:31:38.654863   43974 command_runner.go:130] > # default_mounts_file = ""
	I0818 19:31:38.654870   43974 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0818 19:31:38.654876   43974 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0818 19:31:38.654882   43974 command_runner.go:130] > pids_limit = 1024
	I0818 19:31:38.654889   43974 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0818 19:31:38.654897   43974 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0818 19:31:38.654903   43974 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0818 19:31:38.654913   43974 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0818 19:31:38.654919   43974 command_runner.go:130] > # log_size_max = -1
	I0818 19:31:38.654925   43974 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0818 19:31:38.654931   43974 command_runner.go:130] > # log_to_journald = false
	I0818 19:31:38.654937   43974 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0818 19:31:38.654944   43974 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0818 19:31:38.654949   43974 command_runner.go:130] > # Path to directory for container attach sockets.
	I0818 19:31:38.654956   43974 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0818 19:31:38.654962   43974 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0818 19:31:38.654968   43974 command_runner.go:130] > # bind_mount_prefix = ""
	I0818 19:31:38.654974   43974 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0818 19:31:38.654979   43974 command_runner.go:130] > # read_only = false
	I0818 19:31:38.654985   43974 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0818 19:31:38.654994   43974 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0818 19:31:38.655000   43974 command_runner.go:130] > # live configuration reload.
	I0818 19:31:38.655005   43974 command_runner.go:130] > # log_level = "info"
	I0818 19:31:38.655013   43974 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0818 19:31:38.655019   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.655025   43974 command_runner.go:130] > # log_filter = ""
	I0818 19:31:38.655032   43974 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0818 19:31:38.655041   43974 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0818 19:31:38.655046   43974 command_runner.go:130] > # separated by comma.
	I0818 19:31:38.655053   43974 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0818 19:31:38.655059   43974 command_runner.go:130] > # uid_mappings = ""
	I0818 19:31:38.655065   43974 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0818 19:31:38.655073   43974 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0818 19:31:38.655078   43974 command_runner.go:130] > # separated by comma.
	I0818 19:31:38.655085   43974 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0818 19:31:38.655095   43974 command_runner.go:130] > # gid_mappings = ""
	I0818 19:31:38.655103   43974 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0818 19:31:38.655109   43974 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0818 19:31:38.655118   43974 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0818 19:31:38.655128   43974 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0818 19:31:38.655135   43974 command_runner.go:130] > # minimum_mappable_uid = -1
	I0818 19:31:38.655141   43974 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0818 19:31:38.655149   43974 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0818 19:31:38.655154   43974 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0818 19:31:38.655164   43974 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0818 19:31:38.655170   43974 command_runner.go:130] > # minimum_mappable_gid = -1
	I0818 19:31:38.655176   43974 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0818 19:31:38.655184   43974 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0818 19:31:38.655192   43974 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0818 19:31:38.655196   43974 command_runner.go:130] > # ctr_stop_timeout = 30
	I0818 19:31:38.655202   43974 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0818 19:31:38.655210   43974 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0818 19:31:38.655215   43974 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0818 19:31:38.655222   43974 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0818 19:31:38.655226   43974 command_runner.go:130] > drop_infra_ctr = false
	I0818 19:31:38.655233   43974 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0818 19:31:38.655239   43974 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0818 19:31:38.655249   43974 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0818 19:31:38.655256   43974 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0818 19:31:38.655266   43974 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0818 19:31:38.655278   43974 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0818 19:31:38.655294   43974 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0818 19:31:38.655305   43974 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0818 19:31:38.655313   43974 command_runner.go:130] > # shared_cpuset = ""
	I0818 19:31:38.655325   43974 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0818 19:31:38.655335   43974 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0818 19:31:38.655343   43974 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0818 19:31:38.655349   43974 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0818 19:31:38.655356   43974 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0818 19:31:38.655361   43974 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0818 19:31:38.655373   43974 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0818 19:31:38.655397   43974 command_runner.go:130] > # enable_criu_support = false
	I0818 19:31:38.655409   43974 command_runner.go:130] > # Enable/disable the generation of the container,
	I0818 19:31:38.655419   43974 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0818 19:31:38.655425   43974 command_runner.go:130] > # enable_pod_events = false
	I0818 19:31:38.655431   43974 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0818 19:31:38.655440   43974 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0818 19:31:38.655446   43974 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0818 19:31:38.655452   43974 command_runner.go:130] > # default_runtime = "runc"
	I0818 19:31:38.655457   43974 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0818 19:31:38.655466   43974 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0818 19:31:38.655477   43974 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0818 19:31:38.655484   43974 command_runner.go:130] > # creation as a file is not desired either.
	I0818 19:31:38.655491   43974 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0818 19:31:38.655498   43974 command_runner.go:130] > # the hostname is being managed dynamically.
	I0818 19:31:38.655503   43974 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0818 19:31:38.655509   43974 command_runner.go:130] > # ]
	I0818 19:31:38.655514   43974 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0818 19:31:38.655523   43974 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0818 19:31:38.655531   43974 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0818 19:31:38.655538   43974 command_runner.go:130] > # Each entry in the table should follow the format:
	I0818 19:31:38.655541   43974 command_runner.go:130] > #
	I0818 19:31:38.655546   43974 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0818 19:31:38.655553   43974 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0818 19:31:38.655572   43974 command_runner.go:130] > # runtime_type = "oci"
	I0818 19:31:38.655579   43974 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0818 19:31:38.655584   43974 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0818 19:31:38.655590   43974 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0818 19:31:38.655595   43974 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0818 19:31:38.655601   43974 command_runner.go:130] > # monitor_env = []
	I0818 19:31:38.655605   43974 command_runner.go:130] > # privileged_without_host_devices = false
	I0818 19:31:38.655611   43974 command_runner.go:130] > # allowed_annotations = []
	I0818 19:31:38.655617   43974 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0818 19:31:38.655622   43974 command_runner.go:130] > # Where:
	I0818 19:31:38.655627   43974 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0818 19:31:38.655635   43974 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0818 19:31:38.655643   43974 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0818 19:31:38.655651   43974 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0818 19:31:38.655654   43974 command_runner.go:130] > #   in $PATH.
	I0818 19:31:38.655661   43974 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0818 19:31:38.655668   43974 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0818 19:31:38.655673   43974 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0818 19:31:38.655679   43974 command_runner.go:130] > #   state.
	I0818 19:31:38.655685   43974 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0818 19:31:38.655693   43974 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0818 19:31:38.655700   43974 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0818 19:31:38.655705   43974 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0818 19:31:38.655713   43974 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0818 19:31:38.655719   43974 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0818 19:31:38.655727   43974 command_runner.go:130] > #   The currently recognized values are:
	I0818 19:31:38.655732   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0818 19:31:38.655741   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0818 19:31:38.655748   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0818 19:31:38.655754   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0818 19:31:38.655763   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0818 19:31:38.655769   43974 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0818 19:31:38.655777   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0818 19:31:38.655785   43974 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0818 19:31:38.655791   43974 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0818 19:31:38.655799   43974 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0818 19:31:38.655806   43974 command_runner.go:130] > #   deprecated option "conmon".
	I0818 19:31:38.655812   43974 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0818 19:31:38.655819   43974 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0818 19:31:38.655825   43974 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0818 19:31:38.655832   43974 command_runner.go:130] > #   should be moved to the container's cgroup
	I0818 19:31:38.655838   43974 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0818 19:31:38.655845   43974 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0818 19:31:38.655851   43974 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0818 19:31:38.655858   43974 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0818 19:31:38.655861   43974 command_runner.go:130] > #
	I0818 19:31:38.655866   43974 command_runner.go:130] > # Using the seccomp notifier feature:
	I0818 19:31:38.655871   43974 command_runner.go:130] > #
	I0818 19:31:38.655876   43974 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0818 19:31:38.655884   43974 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0818 19:31:38.655888   43974 command_runner.go:130] > #
	I0818 19:31:38.655894   43974 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0818 19:31:38.655902   43974 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0818 19:31:38.655905   43974 command_runner.go:130] > #
	I0818 19:31:38.655911   43974 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0818 19:31:38.655916   43974 command_runner.go:130] > # feature.
	I0818 19:31:38.655919   43974 command_runner.go:130] > #
	I0818 19:31:38.655927   43974 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0818 19:31:38.655933   43974 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0818 19:31:38.655940   43974 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0818 19:31:38.655948   43974 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0818 19:31:38.655954   43974 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0818 19:31:38.655959   43974 command_runner.go:130] > #
	I0818 19:31:38.655965   43974 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0818 19:31:38.655973   43974 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0818 19:31:38.655977   43974 command_runner.go:130] > #
	I0818 19:31:38.655982   43974 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0818 19:31:38.655990   43974 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0818 19:31:38.655993   43974 command_runner.go:130] > #
	I0818 19:31:38.655998   43974 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0818 19:31:38.656006   43974 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0818 19:31:38.656011   43974 command_runner.go:130] > # limitation.
	I0818 19:31:38.656016   43974 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0818 19:31:38.656022   43974 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0818 19:31:38.656026   43974 command_runner.go:130] > runtime_type = "oci"
	I0818 19:31:38.656032   43974 command_runner.go:130] > runtime_root = "/run/runc"
	I0818 19:31:38.656036   43974 command_runner.go:130] > runtime_config_path = ""
	I0818 19:31:38.656043   43974 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0818 19:31:38.656047   43974 command_runner.go:130] > monitor_cgroup = "pod"
	I0818 19:31:38.656052   43974 command_runner.go:130] > monitor_exec_cgroup = ""
	I0818 19:31:38.656055   43974 command_runner.go:130] > monitor_env = [
	I0818 19:31:38.656061   43974 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0818 19:31:38.656065   43974 command_runner.go:130] > ]
	I0818 19:31:38.656070   43974 command_runner.go:130] > privileged_without_host_devices = false
	I0818 19:31:38.656078   43974 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0818 19:31:38.656084   43974 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0818 19:31:38.656090   43974 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0818 19:31:38.656099   43974 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0818 19:31:38.656109   43974 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0818 19:31:38.656117   43974 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0818 19:31:38.656130   43974 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0818 19:31:38.656139   43974 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0818 19:31:38.656145   43974 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0818 19:31:38.656152   43974 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0818 19:31:38.656155   43974 command_runner.go:130] > # Example:
	I0818 19:31:38.656160   43974 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0818 19:31:38.656164   43974 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0818 19:31:38.656168   43974 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0818 19:31:38.656173   43974 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0818 19:31:38.656176   43974 command_runner.go:130] > # cpuset = 0
	I0818 19:31:38.656180   43974 command_runner.go:130] > # cpushares = "0-1"
	I0818 19:31:38.656183   43974 command_runner.go:130] > # Where:
	I0818 19:31:38.656188   43974 command_runner.go:130] > # The workload name is workload-type.
	I0818 19:31:38.656194   43974 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0818 19:31:38.656199   43974 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0818 19:31:38.656204   43974 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0818 19:31:38.656212   43974 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0818 19:31:38.656217   43974 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0818 19:31:38.656221   43974 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0818 19:31:38.656227   43974 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0818 19:31:38.656231   43974 command_runner.go:130] > # Default value is set to true
	I0818 19:31:38.656235   43974 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0818 19:31:38.656240   43974 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0818 19:31:38.656244   43974 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0818 19:31:38.656248   43974 command_runner.go:130] > # Default value is set to 'false'
	I0818 19:31:38.656252   43974 command_runner.go:130] > # disable_hostport_mapping = false
	I0818 19:31:38.656260   43974 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0818 19:31:38.656264   43974 command_runner.go:130] > #
	I0818 19:31:38.656272   43974 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0818 19:31:38.656281   43974 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0818 19:31:38.656289   43974 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0818 19:31:38.656298   43974 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0818 19:31:38.656306   43974 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0818 19:31:38.656311   43974 command_runner.go:130] > [crio.image]
	I0818 19:31:38.656320   43974 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0818 19:31:38.656326   43974 command_runner.go:130] > # default_transport = "docker://"
	I0818 19:31:38.656336   43974 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0818 19:31:38.656348   43974 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0818 19:31:38.656356   43974 command_runner.go:130] > # global_auth_file = ""
	I0818 19:31:38.656362   43974 command_runner.go:130] > # The image used to instantiate infra containers.
	I0818 19:31:38.656372   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.656379   43974 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0818 19:31:38.656385   43974 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0818 19:31:38.656393   43974 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0818 19:31:38.656399   43974 command_runner.go:130] > # This option supports live configuration reload.
	I0818 19:31:38.656403   43974 command_runner.go:130] > # pause_image_auth_file = ""
	I0818 19:31:38.656411   43974 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0818 19:31:38.656418   43974 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0818 19:31:38.656425   43974 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0818 19:31:38.656435   43974 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0818 19:31:38.656442   43974 command_runner.go:130] > # pause_command = "/pause"
	I0818 19:31:38.656448   43974 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0818 19:31:38.656457   43974 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0818 19:31:38.656467   43974 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0818 19:31:38.656476   43974 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0818 19:31:38.656484   43974 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0818 19:31:38.656491   43974 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0818 19:31:38.656497   43974 command_runner.go:130] > # pinned_images = [
	I0818 19:31:38.656500   43974 command_runner.go:130] > # ]
	I0818 19:31:38.656508   43974 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0818 19:31:38.656515   43974 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0818 19:31:38.656523   43974 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0818 19:31:38.656531   43974 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0818 19:31:38.656539   43974 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0818 19:31:38.656543   43974 command_runner.go:130] > # signature_policy = ""
	I0818 19:31:38.656550   43974 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0818 19:31:38.656556   43974 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0818 19:31:38.656564   43974 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0818 19:31:38.656570   43974 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0818 19:31:38.656578   43974 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0818 19:31:38.656582   43974 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0818 19:31:38.656590   43974 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0818 19:31:38.656597   43974 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0818 19:31:38.656603   43974 command_runner.go:130] > # changing them here.
	I0818 19:31:38.656607   43974 command_runner.go:130] > # insecure_registries = [
	I0818 19:31:38.656612   43974 command_runner.go:130] > # ]
	I0818 19:31:38.656618   43974 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0818 19:31:38.656625   43974 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0818 19:31:38.656630   43974 command_runner.go:130] > # image_volumes = "mkdir"
	I0818 19:31:38.656637   43974 command_runner.go:130] > # Temporary directory to use for storing big files
	I0818 19:31:38.656641   43974 command_runner.go:130] > # big_files_temporary_dir = ""
	I0818 19:31:38.656647   43974 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0818 19:31:38.656653   43974 command_runner.go:130] > # CNI plugins.
	I0818 19:31:38.656657   43974 command_runner.go:130] > [crio.network]
	I0818 19:31:38.656664   43974 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0818 19:31:38.656669   43974 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0818 19:31:38.656675   43974 command_runner.go:130] > # cni_default_network = ""
	I0818 19:31:38.656681   43974 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0818 19:31:38.656688   43974 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0818 19:31:38.656694   43974 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0818 19:31:38.656702   43974 command_runner.go:130] > # plugin_dirs = [
	I0818 19:31:38.656709   43974 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0818 19:31:38.656713   43974 command_runner.go:130] > # ]
	I0818 19:31:38.656721   43974 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0818 19:31:38.656726   43974 command_runner.go:130] > [crio.metrics]
	I0818 19:31:38.656731   43974 command_runner.go:130] > # Globally enable or disable metrics support.
	I0818 19:31:38.656737   43974 command_runner.go:130] > enable_metrics = true
	I0818 19:31:38.656742   43974 command_runner.go:130] > # Specify enabled metrics collectors.
	I0818 19:31:38.656749   43974 command_runner.go:130] > # Per default all metrics are enabled.
	I0818 19:31:38.656754   43974 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0818 19:31:38.656762   43974 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0818 19:31:38.656770   43974 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0818 19:31:38.656774   43974 command_runner.go:130] > # metrics_collectors = [
	I0818 19:31:38.656781   43974 command_runner.go:130] > # 	"operations",
	I0818 19:31:38.656787   43974 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0818 19:31:38.656795   43974 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0818 19:31:38.656801   43974 command_runner.go:130] > # 	"operations_errors",
	I0818 19:31:38.656805   43974 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0818 19:31:38.656812   43974 command_runner.go:130] > # 	"image_pulls_by_name",
	I0818 19:31:38.656816   43974 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0818 19:31:38.656822   43974 command_runner.go:130] > # 	"image_pulls_failures",
	I0818 19:31:38.656827   43974 command_runner.go:130] > # 	"image_pulls_successes",
	I0818 19:31:38.656833   43974 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0818 19:31:38.656837   43974 command_runner.go:130] > # 	"image_layer_reuse",
	I0818 19:31:38.656841   43974 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0818 19:31:38.656848   43974 command_runner.go:130] > # 	"containers_oom_total",
	I0818 19:31:38.656852   43974 command_runner.go:130] > # 	"containers_oom",
	I0818 19:31:38.656858   43974 command_runner.go:130] > # 	"processes_defunct",
	I0818 19:31:38.656861   43974 command_runner.go:130] > # 	"operations_total",
	I0818 19:31:38.656868   43974 command_runner.go:130] > # 	"operations_latency_seconds",
	I0818 19:31:38.656874   43974 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0818 19:31:38.656880   43974 command_runner.go:130] > # 	"operations_errors_total",
	I0818 19:31:38.656884   43974 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0818 19:31:38.656890   43974 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0818 19:31:38.656895   43974 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0818 19:31:38.656902   43974 command_runner.go:130] > # 	"image_pulls_success_total",
	I0818 19:31:38.656906   43974 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0818 19:31:38.656913   43974 command_runner.go:130] > # 	"containers_oom_count_total",
	I0818 19:31:38.656918   43974 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0818 19:31:38.656924   43974 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0818 19:31:38.656927   43974 command_runner.go:130] > # ]
	I0818 19:31:38.656932   43974 command_runner.go:130] > # The port on which the metrics server will listen.
	I0818 19:31:38.656938   43974 command_runner.go:130] > # metrics_port = 9090
	I0818 19:31:38.656943   43974 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0818 19:31:38.656949   43974 command_runner.go:130] > # metrics_socket = ""
	I0818 19:31:38.656954   43974 command_runner.go:130] > # The certificate for the secure metrics server.
	I0818 19:31:38.656962   43974 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0818 19:31:38.656968   43974 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0818 19:31:38.656975   43974 command_runner.go:130] > # certificate on any modification event.
	I0818 19:31:38.656979   43974 command_runner.go:130] > # metrics_cert = ""
	I0818 19:31:38.656986   43974 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0818 19:31:38.656991   43974 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0818 19:31:38.656997   43974 command_runner.go:130] > # metrics_key = ""
	I0818 19:31:38.657003   43974 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0818 19:31:38.657009   43974 command_runner.go:130] > [crio.tracing]
	I0818 19:31:38.657014   43974 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0818 19:31:38.657019   43974 command_runner.go:130] > # enable_tracing = false
	I0818 19:31:38.657024   43974 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0818 19:31:38.657031   43974 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0818 19:31:38.657037   43974 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0818 19:31:38.657043   43974 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0818 19:31:38.657048   43974 command_runner.go:130] > # CRI-O NRI configuration.
	I0818 19:31:38.657055   43974 command_runner.go:130] > [crio.nri]
	I0818 19:31:38.657059   43974 command_runner.go:130] > # Globally enable or disable NRI.
	I0818 19:31:38.657064   43974 command_runner.go:130] > # enable_nri = false
	I0818 19:31:38.657068   43974 command_runner.go:130] > # NRI socket to listen on.
	I0818 19:31:38.657073   43974 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0818 19:31:38.657077   43974 command_runner.go:130] > # NRI plugin directory to use.
	I0818 19:31:38.657084   43974 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0818 19:31:38.657089   43974 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0818 19:31:38.657096   43974 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0818 19:31:38.657101   43974 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0818 19:31:38.657108   43974 command_runner.go:130] > # nri_disable_connections = false
	I0818 19:31:38.657113   43974 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0818 19:31:38.657120   43974 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0818 19:31:38.657127   43974 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0818 19:31:38.657134   43974 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0818 19:31:38.657140   43974 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0818 19:31:38.657146   43974 command_runner.go:130] > [crio.stats]
	I0818 19:31:38.657151   43974 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0818 19:31:38.657159   43974 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0818 19:31:38.657163   43974 command_runner.go:130] > # stats_collection_period = 0
	I0818 19:31:38.657273   43974 cni.go:84] Creating CNI manager for ""
	I0818 19:31:38.657288   43974 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0818 19:31:38.657297   43974 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:31:38.657331   43974 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.185 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-048993 NodeName:multinode-048993 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.185"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.185 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 19:31:38.657485   43974 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.185
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-048993"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.185
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.185"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
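	The kubeadm options logged above are rendered into the InitConfiguration / ClusterConfiguration / KubeletConfiguration YAML that ends here, which is then shipped to the node as kubeadm.yaml.new a few lines below. As a minimal sketch of that kind of templating, not minikube's actual kubeadm package (the struct fields and the template fragment are illustrative only):

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams carries the handful of values substituted into the config.
	// The field names are illustrative, not minikube's real struct.
	type kubeadmParams struct {
		AdvertiseAddress string
		NodeName         string
		PodSubnet        string
	}

	const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress: "192.168.39.185",
			NodeName:         "multinode-048993",
			PodSubnet:        "10.244.0.0/16",
		}
		// Render to stdout; the harness instead writes the rendered file to
		// /var/tmp/minikube/kubeadm.yaml.new over SSH (see the scp line below).
		tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}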
	
	I0818 19:31:38.657547   43974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 19:31:38.668623   43974 command_runner.go:130] > kubeadm
	I0818 19:31:38.668641   43974 command_runner.go:130] > kubectl
	I0818 19:31:38.668649   43974 command_runner.go:130] > kubelet
	I0818 19:31:38.668739   43974 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:31:38.668806   43974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 19:31:38.678682   43974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0818 19:31:38.696241   43974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:31:38.713731   43974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0818 19:31:38.731612   43974 ssh_runner.go:195] Run: grep 192.168.39.185	control-plane.minikube.internal$ /etc/hosts
	I0818 19:31:38.735631   43974 command_runner.go:130] > 192.168.39.185	control-plane.minikube.internal
	I0818 19:31:38.735703   43974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:31:38.873782   43974 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:31:38.888741   43974 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993 for IP: 192.168.39.185
	I0818 19:31:38.888769   43974 certs.go:194] generating shared ca certs ...
	I0818 19:31:38.888795   43974 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:31:38.888987   43974 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 19:31:38.889032   43974 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 19:31:38.889042   43974 certs.go:256] generating profile certs ...
	I0818 19:31:38.889119   43974 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/client.key
	I0818 19:31:38.889174   43974 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.key.9dd43d17
	I0818 19:31:38.889214   43974 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.key
	I0818 19:31:38.889225   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0818 19:31:38.889236   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0818 19:31:38.889249   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0818 19:31:38.889261   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0818 19:31:38.889277   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0818 19:31:38.889297   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0818 19:31:38.889316   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0818 19:31:38.889334   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0818 19:31:38.889403   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 19:31:38.889434   43974 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 19:31:38.889443   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:31:38.889472   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:31:38.889501   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:31:38.889526   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 19:31:38.889562   43974 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:31:38.889588   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:38.889601   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem -> /usr/share/ca-certificates/14934.pem
	I0818 19:31:38.889614   43974 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> /usr/share/ca-certificates/149342.pem
	I0818 19:31:38.890202   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:31:38.915097   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:31:38.939476   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:31:38.963251   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 19:31:38.986443   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 19:31:39.011286   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 19:31:39.036771   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:31:39.060748   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/multinode-048993/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 19:31:39.084324   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 19:31:39.107771   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 19:31:39.132562   43974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 19:31:39.156167   43974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:31:39.172870   43974 ssh_runner.go:195] Run: openssl version
	I0818 19:31:39.178672   43974 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0818 19:31:39.178746   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:31:39.189134   43974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:39.193788   43974 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:39.193849   43974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:39.193892   43974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:31:39.199526   43974 command_runner.go:130] > b5213941
	I0818 19:31:39.199569   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 19:31:39.208825   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 19:31:39.219688   43974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 19:31:39.224199   43974 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:31:39.224225   43974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:31:39.224259   43974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 19:31:39.230088   43974 command_runner.go:130] > 51391683
	I0818 19:31:39.230134   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 19:31:39.239929   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 19:31:39.251315   43974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 19:31:39.256111   43974 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:31:39.256334   43974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:31:39.256381   43974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 19:31:39.262349   43974 command_runner.go:130] > 3ec20f2e
	I0818 19:31:39.262421   43974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
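	Each of the three certificate blocks above hashes a CA PEM with "openssl x509 -hash -noout" and then symlinks /etc/ssl/certs/<hash>.0 to it (b5213941.0 for minikubeCA.pem, 51391683.0 for 14934.pem, 3ec20f2e.0 for 149342.pem). A simplified Go equivalent of one such iteration, shelling out to openssl the same way the commands above do; this is a sketch, not the harness code, and it skips the intermediate copy into /etc/ssl/certs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
	// creates /etc/ssl/certs/<hash>.0 pointing at it, the equivalent of
	// "test -L <link> || ln -fs <pem> <link>" in the log above.
	func linkCertByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // link already present
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}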
	I0818 19:31:39.272220   43974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:31:39.276871   43974 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:31:39.276892   43974 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0818 19:31:39.276898   43974 command_runner.go:130] > Device: 253,1	Inode: 532758      Links: 1
	I0818 19:31:39.276906   43974 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0818 19:31:39.276914   43974 command_runner.go:130] > Access: 2024-08-18 19:24:49.477711638 +0000
	I0818 19:31:39.276919   43974 command_runner.go:130] > Modify: 2024-08-18 19:24:49.477711638 +0000
	I0818 19:31:39.276924   43974 command_runner.go:130] > Change: 2024-08-18 19:24:49.477711638 +0000
	I0818 19:31:39.276929   43974 command_runner.go:130] >  Birth: 2024-08-18 19:24:49.477711638 +0000
	I0818 19:31:39.277001   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 19:31:39.282835   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.282901   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 19:31:39.290054   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.290122   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 19:31:39.295749   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.295812   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 19:31:39.301238   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.301300   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 19:31:39.306734   43974 command_runner.go:130] > Certificate will not expire
	I0818 19:31:39.306797   43974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 19:31:39.312203   43974 command_runner.go:130] > Certificate will not expire
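	Every "openssl x509 -noout -checkend 86400" run above only asserts that the certificate is still valid 24 hours from now, which is why each one answers "Certificate will not expire". The same check can be done in pure Go without shelling out; the following is a sketch of the equivalent logic, not minikube's code:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring "openssl x509 -noout -checkend <seconds>".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}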
	I0818 19:31:39.312276   43974 kubeadm.go:392] StartCluster: {Name:multinode-048993 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-048993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.185 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.7 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:31:39.312413   43974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 19:31:39.312471   43974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:31:39.353401   43974 command_runner.go:130] > 45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add
	I0818 19:31:39.353435   43974 command_runner.go:130] > 034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970
	I0818 19:31:39.353446   43974 command_runner.go:130] > ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09
	I0818 19:31:39.353456   43974 command_runner.go:130] > e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa
	I0818 19:31:39.353465   43974 command_runner.go:130] > 90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5
	I0818 19:31:39.353473   43974 command_runner.go:130] > eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121
	I0818 19:31:39.353482   43974 command_runner.go:130] > e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3
	I0818 19:31:39.353492   43974 command_runner.go:130] > a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56
	I0818 19:31:39.353523   43974 cri.go:89] found id: "45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add"
	I0818 19:31:39.353534   43974 cri.go:89] found id: "034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970"
	I0818 19:31:39.353540   43974 cri.go:89] found id: "ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09"
	I0818 19:31:39.353544   43974 cri.go:89] found id: "e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa"
	I0818 19:31:39.353549   43974 cri.go:89] found id: "90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5"
	I0818 19:31:39.353553   43974 cri.go:89] found id: "eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121"
	I0818 19:31:39.353558   43974 cri.go:89] found id: "e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3"
	I0818 19:31:39.353562   43974 cri.go:89] found id: "a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56"
	I0818 19:31:39.353566   43974 cri.go:89] found id: ""
	I0818 19:31:39.353605   43974 ssh_runner.go:195] Run: sudo runc list -f json
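	StartCluster begins by listing every kube-system container: "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" prints one container ID per line, and each non-empty line becomes a "found id:" entry above. A rough Go sketch of that pattern, assuming crictl can be invoked directly (the harness actually runs it via ssh_runner with sudo):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers returns the container IDs crictl reports for the
	// kube-system namespace, one ID per output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(string(out), "\n") {
			if id := strings.TrimSpace(line); id != "" {
				ids = append(ids, id)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}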
	
	
	==> CRI-O <==
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.284545265Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-7frzh,Uid:9a575e7c-5ef9-468b-a917-ecdb76b22c63,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009538731714441,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:31:44.591804214Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-6sbml,Uid:c64b53bf-6c95-4f8b-abee-12a73b557ab9,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1724009504976513272,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:31:44.591809298Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&PodSandboxMetadata{Name:kindnet-x4z7j,Uid:1a272cb2-280a-42cb-a0b3-9c4292d1db39,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009504943416063,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map
[string]string{kubernetes.io/config.seen: 2024-08-18T19:31:44.591810877Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&PodSandboxMetadata{Name:kube-proxy-28dj6,Uid:d2949b15-f781-4283-a78e-190a50e61487,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009504932784805,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:31:44.591817431Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8c0b6bd1-9414-41ca-92a5-8737a3071582,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1724009504916915002,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-18T19:31:44.591814534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-048993,Uid:251d787115b7540ccdaca898e5c46a2b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009501059749972,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 251d787115b7540ccdaca898e5c46a2b,kubernetes.io/config.seen: 2024-08-18T19:31:40.585996993Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:77b81c5eb3b0d5d8f340754f580aeeb62b
19122d5d7e2fd3ec3ae516203e09a9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-048993,Uid:0bc4e515b3bcad171c5a2bf56de43ea6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009501058978921,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0bc4e515b3bcad171c5a2bf56de43ea6,kubernetes.io/config.seen: 2024-08-18T19:31:40.585998875Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-048993,Uid:05881ddcb619c86507c6e41c4b1fd421,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009501056975485,Labels:map[string]string{component: kube-controller-manager,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05881ddcb619c86507c6e41c4b1fd421,kubernetes.io/config.seen: 2024-08-18T19:31:40.585998061Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&PodSandboxMetadata{Name:etcd-multinode-048993,Uid:679b5c8b5600f8bdcf4b592e6a912dc9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724009501056086419,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.185:2379,kuberne
tes.io/config.hash: 679b5c8b5600f8bdcf4b592e6a912dc9,kubernetes.io/config.seen: 2024-08-18T19:31:40.585993451Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-7frzh,Uid:9a575e7c-5ef9-468b-a917-ecdb76b22c63,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009171945036683,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:26:11.628736617Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8c0b6bd1-9414-41ca-92a5-8737a3071582,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1724009119133369436,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-18T19:25:18.821607172Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-6sbml,Uid:c64b53bf-6c95-4f8b-abee-12a73b557ab9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009119123600035,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:25:18.816743248Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&PodSandboxMetadata{Name:kube-proxy-28dj6,Uid:d2949b15-f781-4283-a78e-190a50e61487,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009104263528900,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:25:03.337855732Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&PodSandboxMetadata{Name:kindnet-x4z7j,Uid:1a272cb2-280a-42cb-a0b3-9c4292d1db39,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009103652594437,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:25:03.344872060Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-048993,Uid:251d787115b7540ccdaca898e5c46a2b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009092666004163,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.185:8443,kubernetes.io/config.hash: 251d787115b7540ccdaca898e5c46a2b,kubernetes.io/config.seen: 2024-08-18T19:24:52.187218372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:71dbd12c99bf66
408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-048993,Uid:0bc4e515b3bcad171c5a2bf56de43ea6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009092662553075,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0bc4e515b3bcad171c5a2bf56de43ea6,kubernetes.io/config.seen: 2024-08-18T19:24:52.187220659Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-048993,Uid:05881ddcb619c86507c6e41c4b1fd421,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009092645189191,Labels:map[string]string{component: kube-c
ontroller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 05881ddcb619c86507c6e41c4b1fd421,kubernetes.io/config.seen: 2024-08-18T19:24:52.187219753Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&PodSandboxMetadata{Name:etcd-multinode-048993,Uid:679b5c8b5600f8bdcf4b592e6a912dc9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724009092639745008,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.185:2379,kubernetes.io/config.hash: 679b5c8b5600f8bdcf4b592e6a912dc9,kubernetes.io/config.seen: 2024-08-18T19:24:52.187211601Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f50e4f74-2ef9-4193-aa29-fdb9f1b57cdb name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.285684004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b0d0d18-3bae-4243-9724-9245adf49dac name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.285757552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b0d0d18-3bae-4243-9724-9245adf49dac name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.286067716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc87dcd9aa9deddc7562fc238cbe336930df16f77335aec78802fd36fcf4f2c0,PodSandboxId:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724009538868344760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0,PodSandboxId:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724009505368994646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7,PodSandboxId:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724009505232833739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdcc7ac806c263132a374548fa08b957407b214c4c3e64e19f92a95f40533d,PodSandboxId:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724009505167584804,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6,PodSandboxId:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724009505139302914,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a,PodSandboxId:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724009501300203179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc,PodSandboxId:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724009501319068230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145,PodSandboxId:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724009501245805614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c,PodSandboxId:77b81c5eb3b0d5d8f340754f580aeeb62b19122d5d7e2fd3ec3ae516203e09a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724009501229041362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5316c792a79b718866fc191168399e5fb13a26819e2b646e0cf0f6b6557a6d62,PodSandboxId:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724009174507967338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add,PodSandboxId:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724009119340101142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970,PodSandboxId:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724009119281310809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09,PodSandboxId:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724009107657715318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa,PodSandboxId:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724009104355534243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5,PodSandboxId:71dbd12c99bf66408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724009092877705381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121,PodSandboxId:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724009092869904469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3,PodSandboxId:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724009092840215405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56,PodSandboxId:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724009092826804549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b0d0d18-3bae-4243-9724-9245adf49dac name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.325309462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0869fbb0-ce75-441c-adc9-dff57d845d02 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.325404798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0869fbb0-ce75-441c-adc9-dff57d845d02 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.326933423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d833f7b3-a269-42b7-9154-afb69746ea17 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.327591798Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009745327502562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d833f7b3-a269-42b7-9154-afb69746ea17 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.328476762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d221973-e405-48a9-842d-aa3c67235523 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.328533802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d221973-e405-48a9-842d-aa3c67235523 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.328892953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc87dcd9aa9deddc7562fc238cbe336930df16f77335aec78802fd36fcf4f2c0,PodSandboxId:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724009538868344760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0,PodSandboxId:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724009505368994646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7,PodSandboxId:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724009505232833739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdcc7ac806c263132a374548fa08b957407b214c4c3e64e19f92a95f40533d,PodSandboxId:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724009505167584804,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6,PodSandboxId:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724009505139302914,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a,PodSandboxId:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724009501300203179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc,PodSandboxId:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724009501319068230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145,PodSandboxId:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724009501245805614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c,PodSandboxId:77b81c5eb3b0d5d8f340754f580aeeb62b19122d5d7e2fd3ec3ae516203e09a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724009501229041362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5316c792a79b718866fc191168399e5fb13a26819e2b646e0cf0f6b6557a6d62,PodSandboxId:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724009174507967338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add,PodSandboxId:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724009119340101142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970,PodSandboxId:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724009119281310809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09,PodSandboxId:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724009107657715318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa,PodSandboxId:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724009104355534243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5,PodSandboxId:71dbd12c99bf66408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724009092877705381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121,PodSandboxId:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724009092869904469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3,PodSandboxId:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724009092840215405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56,PodSandboxId:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724009092826804549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d221973-e405-48a9-842d-aa3c67235523 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.370730745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6e56ec7-cb7f-4807-8255-8da2f69ef83a name=/runtime.v1.RuntimeService/Version
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.370819551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6e56ec7-cb7f-4807-8255-8da2f69ef83a name=/runtime.v1.RuntimeService/Version
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.372072421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6259513b-4d6e-4999-bb1d-6c6c684f5b87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.372551705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009745372524512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6259513b-4d6e-4999-bb1d-6c6c684f5b87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.373360490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0783426-4ca5-4d18-b708-bfa9c0f05971 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.373556999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0783426-4ca5-4d18-b708-bfa9c0f05971 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.374349632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc87dcd9aa9deddc7562fc238cbe336930df16f77335aec78802fd36fcf4f2c0,PodSandboxId:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724009538868344760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0,PodSandboxId:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724009505368994646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7,PodSandboxId:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724009505232833739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdcc7ac806c263132a374548fa08b957407b214c4c3e64e19f92a95f40533d,PodSandboxId:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724009505167584804,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6,PodSandboxId:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724009505139302914,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a,PodSandboxId:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724009501300203179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc,PodSandboxId:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724009501319068230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145,PodSandboxId:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724009501245805614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c,PodSandboxId:77b81c5eb3b0d5d8f340754f580aeeb62b19122d5d7e2fd3ec3ae516203e09a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724009501229041362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5316c792a79b718866fc191168399e5fb13a26819e2b646e0cf0f6b6557a6d62,PodSandboxId:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724009174507967338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add,PodSandboxId:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724009119340101142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970,PodSandboxId:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724009119281310809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09,PodSandboxId:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724009107657715318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa,PodSandboxId:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724009104355534243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5,PodSandboxId:71dbd12c99bf66408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724009092877705381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121,PodSandboxId:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724009092869904469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3,PodSandboxId:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724009092840215405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56,PodSandboxId:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724009092826804549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0783426-4ca5-4d18-b708-bfa9c0f05971 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.415456198Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=026a6354-fd9e-42fa-84eb-76d30b917d6f name=/runtime.v1.RuntimeService/Version
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.415531156Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=026a6354-fd9e-42fa-84eb-76d30b917d6f name=/runtime.v1.RuntimeService/Version
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.417278088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e5361c7-a5f0-45b5-8d5a-f1e4675bbaa0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.417712576Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009745417691521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e5361c7-a5f0-45b5-8d5a-f1e4675bbaa0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.418493839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e89133a-e636-4c68-9c10-87f9939bc5cc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.418571492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e89133a-e636-4c68-9c10-87f9939bc5cc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:35:45 multinode-048993 crio[2754]: time="2024-08-18 19:35:45.423642391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cc87dcd9aa9deddc7562fc238cbe336930df16f77335aec78802fd36fcf4f2c0,PodSandboxId:92b4b30080a97e9469dee8f37c9a17f23828ebced0d66aa1ad25879c15a71a04,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724009538868344760,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0,PodSandboxId:858d857765a3389106803d63e4f5b4efa8a4e9f233485ee0f1b46aba9115e83a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724009505368994646,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7,PodSandboxId:200c9423cba15336d39e4e6c82dfebbd8a08f36997df1d279585d2dde8f5caf8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724009505232833739,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9bdcc7ac806c263132a374548fa08b957407b214c4c3e64e19f92a95f40533d,PodSandboxId:660596f961b872119445b95dad8d4884150058c39d14dc669a75ad2dd8f43b87,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724009505167584804,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6,PodSandboxId:7518b32a59a9495f486088d1265c36742b8ab3eb7ec0e1951d83942dd2457461,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724009505139302914,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a,PodSandboxId:ae5a99e5e89a0a3e8a6ec73db448a00e01b39dff4ccc55e3f750f8b1673653e9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724009501300203179,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc,PodSandboxId:819c70ddf881bee71c65f7be229347baee8a0d90246dd76ba2da08f288b1a40b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724009501319068230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145,PodSandboxId:de5d60540061c54ea5cbf72d76d8ad8b879be8cbd41482f4236e1fabde4918fe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724009501245805614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c,PodSandboxId:77b81c5eb3b0d5d8f340754f580aeeb62b19122d5d7e2fd3ec3ae516203e09a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724009501229041362,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5316c792a79b718866fc191168399e5fb13a26819e2b646e0cf0f6b6557a6d62,PodSandboxId:feee816b0242be899db394f06114e540d1f63d11c78e3b94fd8ad1398a574f7e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724009174507967338,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-7frzh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9a575e7c-5ef9-468b-a917-ecdb76b22c63,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add,PodSandboxId:17994e6d5a09abb6a5ab0b71f4dc6eafb7a961b30928aeced669dc1faeb4f387,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724009119340101142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6sbml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c64b53bf-6c95-4f8b-abee-12a73b557ab9,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034596ea64fd32ce41c86385036bcc5cb1bc6839416b625392307560bcfa1970,PodSandboxId:a514572853159fb306a8a34ec7d0694152cb67790013f8c414ece931771a30dd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724009119281310809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 8c0b6bd1-9414-41ca-92a5-8737a3071582,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09,PodSandboxId:78e151ba3ed37d49c3e9d756cb39d01ab9498387a7865bbadb6e7e2dbedfc158,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724009107657715318,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4z7j,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 1a272cb2-280a-42cb-a0b3-9c4292d1db39,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa,PodSandboxId:b64158a3a56427c67b0a821142d687ce3df08d36fb70b8bee48dfa4b8c018769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724009104355534243,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-28dj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d2949b15-f781-4283-a78e-190a50e61487,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5,PodSandboxId:71dbd12c99bf66408886f2d2caadd463dd041d4928ca24a796d562bb25c75b30,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724009092877705381,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b
c4e515b3bcad171c5a2bf56de43ea6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121,PodSandboxId:f83c57db372e756502b5c37658710c70f5de891bfef2d7cf0ba53cb494525ce7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724009092869904469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 679b5c8b5600f8bdcf4b592e6a912dc9,},Annotations:
map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3,PodSandboxId:5e2bf7189a6d738e35265503372444983d06377c3783951ed514a2347a8d594f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724009092840215405,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 251d787115b7540ccdaca898e5c46a2b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56,PodSandboxId:888e9f9ce70d4b55f23172b0eea0e9f4c3d286c26537e30eb543247afd698dc5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724009092826804549,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-048993,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05881ddcb619c86507c6e41c4b1fd421,},Annotations:map
[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e89133a-e636-4c68-9c10-87f9939bc5cc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cc87dcd9aa9de       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   92b4b30080a97       busybox-7dff88458-7frzh
	e5524d7007d0d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   858d857765a33       kindnet-x4z7j
	4e11f83ab3834       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   200c9423cba15       coredns-6f6b679f8f-6sbml
	c9bdcc7ac806c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   660596f961b87       storage-provisioner
	eb6dcb819816d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   7518b32a59a94       kube-proxy-28dj6
	2af5b668c0fcb       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   819c70ddf881b       kube-apiserver-multinode-048993
	9d6e54ff4050c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   ae5a99e5e89a0       etcd-multinode-048993
	e4d8d775d3b05       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   de5d60540061c       kube-controller-manager-multinode-048993
	fdebc266be304       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   77b81c5eb3b0d       kube-scheduler-multinode-048993
	5316c792a79b7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   feee816b0242b       busybox-7dff88458-7frzh
	45a61d7ee2e26       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   17994e6d5a09a       coredns-6f6b679f8f-6sbml
	034596ea64fd3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   a514572853159       storage-provisioner
	ad82c36028623       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   78e151ba3ed37       kindnet-x4z7j
	e5faa6d0a7631       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   b64158a3a5642       kube-proxy-28dj6
	90d31f58d95ae       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   71dbd12c99bf6       kube-scheduler-multinode-048993
	eec00e2c5e7eb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   f83c57db372e7       etcd-multinode-048993
	e1d4611a4a993       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   5e2bf7189a6d7       kube-apiserver-multinode-048993
	a55d4b9fa2536       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   888e9f9ce70d4       kube-controller-manager-multinode-048993
	
	
	==> coredns [45a61d7ee2e268adb6056fbd26613f36a399fab0ebfd652f589ae4ccc3f74add] <==
	[INFO] 10.244.1.2:55890 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001763973s
	[INFO] 10.244.1.2:44367 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008542s
	[INFO] 10.244.1.2:46850 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000153428s
	[INFO] 10.244.1.2:48940 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001346329s
	[INFO] 10.244.1.2:37702 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086624s
	[INFO] 10.244.1.2:55482 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073943s
	[INFO] 10.244.1.2:48162 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107614s
	[INFO] 10.244.0.3:51710 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123557s
	[INFO] 10.244.0.3:36847 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062581s
	[INFO] 10.244.0.3:46175 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079334s
	[INFO] 10.244.0.3:47441 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007288s
	[INFO] 10.244.1.2:53518 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015543s
	[INFO] 10.244.1.2:50528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105074s
	[INFO] 10.244.1.2:55912 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006123s
	[INFO] 10.244.1.2:58978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080123s
	[INFO] 10.244.0.3:37306 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013604s
	[INFO] 10.244.0.3:50941 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014053s
	[INFO] 10.244.0.3:41496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088488s
	[INFO] 10.244.0.3:56717 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011318s
	[INFO] 10.244.1.2:54243 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150812s
	[INFO] 10.244.1.2:42566 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000111402s
	[INFO] 10.244.1.2:55877 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064247s
	[INFO] 10.244.1.2:59078 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000062466s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4e11f83ab38346d2dd1b067e32f23c513dde09166bc2a37f3c3e51be2303a2c7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:42879 - 37966 "HINFO IN 3929632951270858664.6796358618220062419. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009777619s
	
	
	==> describe nodes <==
	Name:               multinode-048993
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-048993
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=multinode-048993
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T19_24_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:24:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-048993
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:35:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:31:44 +0000   Sun, 18 Aug 2024 19:24:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:31:44 +0000   Sun, 18 Aug 2024 19:24:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:31:44 +0000   Sun, 18 Aug 2024 19:24:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:31:44 +0000   Sun, 18 Aug 2024 19:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.185
	  Hostname:    multinode-048993
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 dddb9e3f1ed4476a8ed6af277f7d2c4f
	  System UUID:                dddb9e3f-1ed4-476a-8ed6-af277f7d2c4f
	  Boot ID:                    1c5c5224-be60-4cf5-8851-63b45bb308bc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7frzh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 coredns-6f6b679f8f-6sbml                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-048993                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-x4z7j                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-048993             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-048993    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-28dj6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-048993             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-048993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-048993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-048993 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-048993 event: Registered Node multinode-048993 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-048993 status is now: NodeReady
	  Normal  Starting                 4m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node multinode-048993 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node multinode-048993 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node multinode-048993 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s                node-controller  Node multinode-048993 event: Registered Node multinode-048993 in Controller
	
	
	Name:               multinode-048993-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-048993-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=multinode-048993
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_18T19_32_21_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:32:21 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-048993-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:33:22 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 18 Aug 2024 19:32:51 +0000   Sun, 18 Aug 2024 19:34:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 18 Aug 2024 19:32:51 +0000   Sun, 18 Aug 2024 19:34:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 18 Aug 2024 19:32:51 +0000   Sun, 18 Aug 2024 19:34:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 18 Aug 2024 19:32:51 +0000   Sun, 18 Aug 2024 19:34:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.14
	  Hostname:    multinode-048993-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c34bd67fb2054565bffc7efd4e32bba6
	  System UUID:                c34bd67f-b205-4565-bffc-7efd4e32bba6
	  Boot ID:                    c4f23b01-b2a4-4e59-93a6-814d6593da13
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-4d24z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-gprqg              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m56s
	  kube-system                 kube-proxy-mvc7l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m56s (x2 over 9m57s)  kubelet          Node multinode-048993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m56s (x2 over 9m57s)  kubelet          Node multinode-048993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s (x2 over 9m57s)  kubelet          Node multinode-048993-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m36s                  kubelet          Node multinode-048993-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m25s)  kubelet          Node multinode-048993-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m25s)  kubelet          Node multinode-048993-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m25s)  kubelet          Node multinode-048993-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-048993-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-048993-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055516] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.171897] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.144371] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.303137] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.065184] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +3.593553] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.063740] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.996975] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
	[  +0.071088] kauditd_printk_skb: 69 callbacks suppressed
	[Aug18 19:25] systemd-fstab-generator[1331]: Ignoring "noauto" option for root device
	[  +1.198213] kauditd_printk_skb: 43 callbacks suppressed
	[ +15.491035] kauditd_printk_skb: 38 callbacks suppressed
	[Aug18 19:26] kauditd_printk_skb: 12 callbacks suppressed
	[Aug18 19:31] systemd-fstab-generator[2671]: Ignoring "noauto" option for root device
	[  +0.150910] systemd-fstab-generator[2683]: Ignoring "noauto" option for root device
	[  +0.176988] systemd-fstab-generator[2697]: Ignoring "noauto" option for root device
	[  +0.150127] systemd-fstab-generator[2710]: Ignoring "noauto" option for root device
	[  +0.260152] systemd-fstab-generator[2738]: Ignoring "noauto" option for root device
	[  +8.138798] systemd-fstab-generator[2839]: Ignoring "noauto" option for root device
	[  +0.082421] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.521406] systemd-fstab-generator[2959]: Ignoring "noauto" option for root device
	[  +4.663687] kauditd_printk_skb: 74 callbacks suppressed
	[  +8.319086] kauditd_printk_skb: 34 callbacks suppressed
	[  +3.071025] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[Aug18 19:32] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [9d6e54ff4050cefc7f11ea3ec622ee4f2f19298c5d923e95c7ecad9a241f201a] <==
	{"level":"info","ts":"2024-08-18T19:31:41.766429Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e1b99ad77146789e","local-member-id":"8fbc2df34e14192d","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:31:41.766482Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:31:41.764561Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:31:41.785063Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-08-18T19:31:41.787186Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-08-18T19:31:41.787238Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-18T19:31:41.792608Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8fbc2df34e14192d","initial-advertise-peer-urls":["https://192.168.39.185:2380"],"listen-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.185:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T19:31:41.792685Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T19:31:43.183219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-18T19:31:43.183334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-18T19:31:43.183385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgPreVoteResp from 8fbc2df34e14192d at term 2"}
	{"level":"info","ts":"2024-08-18T19:31:43.183429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:31:43.183453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d received MsgVoteResp from 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-08-18T19:31:43.183480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8fbc2df34e14192d became leader at term 3"}
	{"level":"info","ts":"2024-08-18T19:31:43.183506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8fbc2df34e14192d elected leader 8fbc2df34e14192d at term 3"}
	{"level":"info","ts":"2024-08-18T19:31:43.188766Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8fbc2df34e14192d","local-member-attributes":"{Name:multinode-048993 ClientURLs:[https://192.168.39.185:2379]}","request-path":"/0/members/8fbc2df34e14192d/attributes","cluster-id":"e1b99ad77146789e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T19:31:43.188871Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:31:43.188945Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T19:31:43.188988Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T19:31:43.189005Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:31:43.190244Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:31:43.191255Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T19:31:43.190312Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:31:43.192379Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.185:2379"}
	{"level":"info","ts":"2024-08-18T19:33:03.268773Z","caller":"traceutil/trace.go:171","msg":"trace[811109797] transaction","detail":"{read_only:false; response_revision:1131; number_of_response:1; }","duration":"164.047609ms","start":"2024-08-18T19:33:03.104682Z","end":"2024-08-18T19:33:03.268730Z","steps":["trace[811109797] 'process raft request'  (duration: 163.629939ms)"],"step_count":1}
	
	
	==> etcd [eec00e2c5e7eb238c344ba3ef555cb4c190c5e6b239a3b39090d0647732e5121] <==
	{"level":"info","ts":"2024-08-18T19:24:53.828288Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-08-18T19:25:49.098014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.573628ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814266096251320458 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-048993-m02.17ece92fca2ea282\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-048993-m02.17ece92fca2ea282\" value_size:646 lease:1814266096251319858 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-18T19:25:49.098215Z","caller":"traceutil/trace.go:171","msg":"trace[2076543562] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"229.420574ms","start":"2024-08-18T19:25:48.868781Z","end":"2024-08-18T19:25:49.098202Z","steps":["trace[2076543562] 'process raft request'  (duration: 69.164798ms)","trace[2076543562] 'compare'  (duration: 159.486895ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:25:54.204057Z","caller":"traceutil/trace.go:171","msg":"trace[1090596798] linearizableReadLoop","detail":"{readStateIndex:498; appliedIndex:497; }","duration":"148.49999ms","start":"2024-08-18T19:25:54.055542Z","end":"2024-08-18T19:25:54.204042Z","steps":["trace[1090596798] 'read index received'  (duration: 86.707504ms)","trace[1090596798] 'applied index is now lower than readState.Index'  (duration: 61.791606ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:25:54.204232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.690093ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-048993-m02\" ","response":"range_response_count:1 size:2886"}
	{"level":"info","ts":"2024-08-18T19:25:54.204255Z","caller":"traceutil/trace.go:171","msg":"trace[1149031216] range","detail":"{range_begin:/registry/minions/multinode-048993-m02; range_end:; response_count:1; response_revision:478; }","duration":"148.732415ms","start":"2024-08-18T19:25:54.055516Z","end":"2024-08-18T19:25:54.204248Z","steps":["trace[1149031216] 'agreement among raft nodes before linearized reading'  (duration: 148.599311ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:25:54.204422Z","caller":"traceutil/trace.go:171","msg":"trace[1153716909] transaction","detail":"{read_only:false; response_revision:478; number_of_response:1; }","duration":"278.404096ms","start":"2024-08-18T19:25:53.926008Z","end":"2024-08-18T19:25:54.204412Z","steps":["trace[1153716909] 'process raft request'  (duration: 216.33591ms)","trace[1153716909] 'compare'  (duration: 61.491141ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:25:54.637542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.350604ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1814266096251320547 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:459 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-18T19:25:54.637713Z","caller":"traceutil/trace.go:171","msg":"trace[1477434445] transaction","detail":"{read_only:false; response_revision:479; number_of_response:1; }","duration":"426.862549ms","start":"2024-08-18T19:25:54.210838Z","end":"2024-08-18T19:25:54.637701Z","steps":["trace[1477434445] 'process raft request'  (duration: 198.751703ms)","trace[1477434445] 'compare'  (duration: 227.20758ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:25:54.637785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:25:54.210822Z","time spent":"426.930462ms","remote":"127.0.0.1:55684","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2880,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:459 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2829 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2024-08-18T19:25:54.637896Z","caller":"traceutil/trace.go:171","msg":"trace[391627134] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"367.976841ms","start":"2024-08-18T19:25:54.269908Z","end":"2024-08-18T19:25:54.637884Z","steps":["trace[391627134] 'read index received'  (duration: 139.647608ms)","trace[391627134] 'applied index is now lower than readState.Index'  (duration: 228.328115ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:25:54.637994Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.080322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T19:25:54.638030Z","caller":"traceutil/trace.go:171","msg":"trace[470266829] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:479; }","duration":"368.122542ms","start":"2024-08-18T19:25:54.269902Z","end":"2024-08-18T19:25:54.638025Z","steps":["trace[470266829] 'agreement among raft nodes before linearized reading'  (duration: 368.062252ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:25:54.638065Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:25:54.269870Z","time spent":"368.189971ms","remote":"127.0.0.1:55154","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-08-18T19:26:43.084535Z","caller":"traceutil/trace.go:171","msg":"trace[541571492] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"202.155449ms","start":"2024-08-18T19:26:42.882341Z","end":"2024-08-18T19:26:43.084497Z","steps":["trace[541571492] 'process raft request'  (duration: 125.00485ms)","trace[541571492] 'compare'  (duration: 77.020137ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:29:58.687091Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-18T19:29:58.688595Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-048993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	{"level":"warn","ts":"2024-08-18T19:29:58.688696Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:29:58.688813Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:29:58.740982Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:29:58.741312Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.185:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:29:58.743225Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8fbc2df34e14192d","current-leader-member-id":"8fbc2df34e14192d"}
	{"level":"info","ts":"2024-08-18T19:29:58.748425Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-08-18T19:29:58.748573Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.185:2380"}
	{"level":"info","ts":"2024-08-18T19:29:58.748605Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-048993","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.185:2380"],"advertise-client-urls":["https://192.168.39.185:2379"]}
	
	
	==> kernel <==
	 19:35:45 up 11 min,  0 users,  load average: 0.18, 0.20, 0.12
	Linux multinode-048993 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ad82c360286238cf58abf15f7a2ede85f0cae5dd9786b40b3084c9ebcf857e09] <==
	I0818 19:29:08.708522       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:18.707426       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:29:18.707489       1 main.go:299] handling current node
	I0818 19:29:18.707509       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:29:18.707516       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:29:18.707686       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:29:18.707719       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:28.710002       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:29:28.710054       1 main.go:299] handling current node
	I0818 19:29:28.710070       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:29:28.710076       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:29:28.710248       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:29:28.710273       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:38.715120       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:29:38.715196       1 main.go:299] handling current node
	I0818 19:29:38.715213       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:29:38.715219       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:29:38.715387       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:29:38.715421       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:48.715424       1 main.go:295] Handling node with IPs: map[192.168.39.7:{}]
	I0818 19:29:48.715467       1 main.go:322] Node multinode-048993-m03 has CIDR [10.244.3.0/24] 
	I0818 19:29:48.715639       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:29:48.715665       1 main.go:299] handling current node
	I0818 19:29:48.715677       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:29:48.715682       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [e5524d7007d0d51b313a8636a69b113573b80f9a420b871898ee5fcfc12e92d0] <==
	I0818 19:34:36.430177       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:34:46.423519       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:34:46.423657       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:34:46.423807       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:34:46.423831       1 main.go:299] handling current node
	I0818 19:34:56.422853       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:34:56.423011       1 main.go:299] handling current node
	I0818 19:34:56.423045       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:34:56.423065       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:35:06.425248       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:35:06.425417       1 main.go:299] handling current node
	I0818 19:35:06.425471       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:35:06.425490       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:35:16.424774       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:35:16.424818       1 main.go:299] handling current node
	I0818 19:35:16.424832       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:35:16.424837       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:35:26.425589       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:35:26.425620       1 main.go:299] handling current node
	I0818 19:35:26.425632       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:35:26.425637       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	I0818 19:35:36.431410       1 main.go:295] Handling node with IPs: map[192.168.39.185:{}]
	I0818 19:35:36.431509       1 main.go:299] handling current node
	I0818 19:35:36.431538       1 main.go:295] Handling node with IPs: map[192.168.39.14:{}]
	I0818 19:35:36.431556       1 main.go:322] Node multinode-048993-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2af5b668c0fcb4601f4fa33aca7840ad0600bccc019ad5f52061cbf18e8666cc] <==
	I0818 19:31:44.548528       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 19:31:44.549356       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:31:44.549519       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0818 19:31:44.549548       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:31:44.549779       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 19:31:44.558795       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:31:44.561244       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:31:44.563051       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:31:44.563295       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:31:44.563358       1 policy_source.go:224] refreshing policies
	I0818 19:31:44.564859       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 19:31:44.564946       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:31:44.565042       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:31:44.565064       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:31:44.565070       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:31:44.567306       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0818 19:31:44.581942       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0818 19:31:45.455672       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0818 19:31:46.740787       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0818 19:31:46.869586       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0818 19:31:46.884629       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0818 19:31:46.961372       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0818 19:31:46.972095       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0818 19:31:47.855353       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0818 19:31:48.148509       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e1d4611a4a9939d45c3a629ae08898eae278c2db47376b0467ce11679f2567f3] <==
	W0818 19:29:58.711540       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711650       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711707       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711756       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711791       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711825       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711876       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711914       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.711975       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712034       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712084       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712114       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712261       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712327       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712386       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712419       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712457       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712488       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712518       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.712698       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.713098       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.713746       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.714069       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.716648       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:29:58.716821       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a55d4b9fa2536f9dc3981230c732e76b55f82884401da8e4e5de5e8dfe3b2b56] <==
	I0818 19:27:31.908482       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:32.135803       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:27:32.135965       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:33.346887       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-048993-m03\" does not exist"
	I0818 19:27:33.347923       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:27:33.369639       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-048993-m03" podCIDRs=["10.244.3.0/24"]
	I0818 19:27:33.369891       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:33.370006       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:33.659644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:34.006439       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:37.553545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:43.511641       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:52.947572       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:27:52.947681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:52.963026       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:27:57.462640       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:28:32.478621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:28:32.479120       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m03"
	I0818 19:28:32.499053       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:28:32.510655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.120692ms"
	I0818 19:28:32.511395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.86µs"
	I0818 19:28:37.530905       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:28:37.549920       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:28:37.554570       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:28:47.631015       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	
	
	==> kube-controller-manager [e4d8d775d3b05a2c370f35633ad79653ecb9a8b352b9561d52e430289641f145] <==
	I0818 19:33:00.054692       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-048993-m03\" does not exist"
	I0818 19:33:00.075367       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-048993-m03" podCIDRs=["10.244.2.0/24"]
	I0818 19:33:00.076388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:00.076544       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:00.476766       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:00.845264       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:02.967793       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:10.424852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:18.704729       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:33:18.704797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:18.718676       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:22.925342       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:23.566374       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:23.579502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:33:24.141115       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-048993-m02"
	I0818 19:33:24.141303       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m03"
	I0818 19:34:02.942415       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:34:02.957508       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	I0818 19:34:02.980398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.304623ms"
	I0818 19:34:02.980488       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.542µs"
	I0818 19:34:07.864513       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2kq2l"
	I0818 19:34:07.888502       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2kq2l"
	I0818 19:34:07.889352       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dg95p"
	I0818 19:34:07.917027       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dg95p"
	I0818 19:34:08.084560       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-048993-m02"
	
	
	==> kube-proxy [e5faa6d0a763150c0209384da33be32cde86237dc5a9cf46a3452d61b5e9ebfa] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:25:04.538381       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:25:04.554254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E0818 19:25:04.554469       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:25:04.586737       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:25:04.586825       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:25:04.586866       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:25:04.590088       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:25:04.590530       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:25:04.590686       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:25:04.592650       1 config.go:197] "Starting service config controller"
	I0818 19:25:04.592731       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:25:04.592774       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:25:04.592790       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:25:04.593683       1 config.go:326] "Starting node config controller"
	I0818 19:25:04.593721       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:25:04.693465       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:25:04.693589       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:25:04.693848       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [eb6dcb819816dc6fe8319792b6197b3a3c89211066e8d50cdde8050a5dd4ffb6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:31:45.506356       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:31:45.524805       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.185"]
	E0818 19:31:45.524864       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:31:45.577563       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:31:45.577624       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:31:45.577651       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:31:45.583778       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:31:45.584037       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:31:45.584068       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:31:45.585868       1 config.go:197] "Starting service config controller"
	I0818 19:31:45.585912       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:31:45.585940       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:31:45.585943       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:31:45.587320       1 config.go:326] "Starting node config controller"
	I0818 19:31:45.587348       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:31:45.686661       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:31:45.686837       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:31:45.687449       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [90d31f58d95aef412bc7bdee2c03f439d9865893af3ae4955a81806a66c221e5] <==
	E0818 19:24:56.337949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.341526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 19:24:56.341573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.415417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 19:24:56.415466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.446698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 19:24:56.446862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.529984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 19:24:56.530040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.638435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 19:24:56.638490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.642234       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 19:24:56.642277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.663428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:24:56.663461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.723613       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 19:24:56.723686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.768549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 19:24:56.768907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:24:56.850474       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 19:24:56.850633       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0818 19:24:59.713393       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:29:58.680876       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0818 19:29:58.681746       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0818 19:29:58.682242       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fdebc266be3045f3db45a8108c5830bcc6de81c6e60fc3c88e470f62fef5e16c] <==
	I0818 19:31:42.272617       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:31:44.488403       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 19:31:44.488539       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 19:31:44.488569       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:31:44.488647       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:31:44.556559       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:31:44.556633       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:31:44.571051       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:31:44.571361       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:31:44.571422       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:31:44.571465       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:31:44.671861       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 19:34:30 multinode-048993 kubelet[2966]: E0818 19:34:30.733056    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009670732592668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:34:40 multinode-048993 kubelet[2966]: E0818 19:34:40.671297    2966 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:34:40 multinode-048993 kubelet[2966]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:34:40 multinode-048993 kubelet[2966]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:34:40 multinode-048993 kubelet[2966]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:34:40 multinode-048993 kubelet[2966]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:34:40 multinode-048993 kubelet[2966]: E0818 19:34:40.736107    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009680735622249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:34:40 multinode-048993 kubelet[2966]: E0818 19:34:40.736558    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009680735622249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:34:50 multinode-048993 kubelet[2966]: E0818 19:34:50.739379    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009690738725969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:34:50 multinode-048993 kubelet[2966]: E0818 19:34:50.739714    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009690738725969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:00 multinode-048993 kubelet[2966]: E0818 19:35:00.741032    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009700740663540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:00 multinode-048993 kubelet[2966]: E0818 19:35:00.741074    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009700740663540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:10 multinode-048993 kubelet[2966]: E0818 19:35:10.742751    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009710742508094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:10 multinode-048993 kubelet[2966]: E0818 19:35:10.742852    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009710742508094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:20 multinode-048993 kubelet[2966]: E0818 19:35:20.744467    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009720744022529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:20 multinode-048993 kubelet[2966]: E0818 19:35:20.745706    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009720744022529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:30 multinode-048993 kubelet[2966]: E0818 19:35:30.748853    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009730748232488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:30 multinode-048993 kubelet[2966]: E0818 19:35:30.748891    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009730748232488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:40 multinode-048993 kubelet[2966]: E0818 19:35:40.671281    2966 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 19:35:40 multinode-048993 kubelet[2966]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 19:35:40 multinode-048993 kubelet[2966]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 19:35:40 multinode-048993 kubelet[2966]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 19:35:40 multinode-048993 kubelet[2966]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 19:35:40 multinode-048993 kubelet[2966]: E0818 19:35:40.750480    2966 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009740750090050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:35:40 multinode-048993 kubelet[2966]: E0818 19:35:40.750505    2966 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724009740750090050,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:35:45.008504   45843 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-7747/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-048993 -n multinode-048993
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-048993 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.27s)

TestPreload (360.02s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-878950 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0818 19:41:27.087643   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:41:44.019099   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-878950 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m37.467982697s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-878950 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-878950 image pull gcr.io/k8s-minikube/busybox: (2.772519255s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-878950
E0818 19:44:26.648003   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-878950: exit status 82 (2m0.454305045s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-878950"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-878950 failed: exit status 82
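The failure itself is an exit-status check: preload_test.go:58 runs `out/minikube-linux-amd64 stop -p test-preload-878950`, and the non-zero exit code (82 here, alongside the GUEST_STOP_TIMEOUT error shown above) is treated as a test failure at preload_test.go:60. A rough, self-contained Go sketch of that assertion (not the actual test code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same stop command as preload_test.go:58; a clean stop should
	// exit 0, while this run exited 82 with GUEST_STOP_TIMEOUT.
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "test-preload-878950")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("stop failed with exit status %d:\n%s\n", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Printf("stop could not be started: %v\n", err)
		return
	}
	fmt.Println("stop succeeded")
}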
panic.go:626: *** TestPreload FAILED at 2024-08-18 19:45:23.236059069 +0000 UTC m=+4033.628398330
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-878950 -n test-preload-878950
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-878950 -n test-preload-878950: exit status 3 (18.426792068s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:45:41.659703   49075 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0818 19:45:41.659723   49075 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-878950" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-878950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-878950
--- FAIL: TestPreload (360.02s)
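One detail worth noting in the post-mortem above: helpers_test.go:239-241 tolerates the non-zero exit from `minikube status` ("may be ok"), but because the host state came back as "Error" rather than "Running", log retrieval is skipped and the profile is simply deleted. A hypothetical helper expressing that decision (not taken from helpers_test.go):

package main

import "fmt"

// shouldSkipLogs is a hypothetical name, not taken from helpers_test.go; it
// expresses the decision at helpers_test.go:239-241: a non-zero status exit
// code "may be ok", but a host state other than "Running" means post-mortem
// log retrieval is skipped.
func shouldSkipLogs(hostState string) bool {
	return hostState != "Running"
}

func main() {
	// In the run above the host state came back as "Error" (exit status 3,
	// SSH to 192.168.39.112 had no route to host), so logs were skipped.
	fmt.Println(shouldSkipLogs("Error")) // prints: true
}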

                                                
                                    
x
+
TestKubernetesUpgrade (405.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0818 19:51:44.018958   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m53.194833397s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-179876] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-179876" primary control-plane node in "kubernetes-upgrade-179876" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:51:36.970367   55950 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:51:36.970487   55950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:51:36.970499   55950 out.go:358] Setting ErrFile to fd 2...
	I0818 19:51:36.970506   55950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:51:36.970754   55950 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:51:36.971510   55950 out.go:352] Setting JSON to false
	I0818 19:51:36.972699   55950 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5641,"bootTime":1724005056,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:51:36.972776   55950 start.go:139] virtualization: kvm guest
	I0818 19:51:36.975231   55950 out.go:177] * [kubernetes-upgrade-179876] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:51:36.976774   55950 notify.go:220] Checking for updates...
	I0818 19:51:36.976784   55950 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:51:36.978368   55950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:51:36.979631   55950 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:51:36.980771   55950 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:51:36.981927   55950 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:51:36.984019   55950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:51:36.985950   55950 config.go:182] Loaded profile config "cert-expiration-735899": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:51:36.986148   55950 config.go:182] Loaded profile config "cert-options-272048": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:51:36.986270   55950 config.go:182] Loaded profile config "pause-147100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:51:36.986385   55950 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:51:37.024152   55950 out.go:177] * Using the kvm2 driver based on user configuration
	I0818 19:51:37.025255   55950 start.go:297] selected driver: kvm2
	I0818 19:51:37.025268   55950 start.go:901] validating driver "kvm2" against <nil>
	I0818 19:51:37.025289   55950 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:51:37.026026   55950 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:51:37.026116   55950 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:51:37.041722   55950 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:51:37.041765   55950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 19:51:37.041973   55950 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 19:51:37.042001   55950 cni.go:84] Creating CNI manager for ""
	I0818 19:51:37.042008   55950 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:51:37.042018   55950 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 19:51:37.042062   55950 start.go:340] cluster config:
	{Name:kubernetes-upgrade-179876 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:51:37.042158   55950 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:51:37.043731   55950 out.go:177] * Starting "kubernetes-upgrade-179876" primary control-plane node in "kubernetes-upgrade-179876" cluster
	I0818 19:51:37.044654   55950 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 19:51:37.044692   55950 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0818 19:51:37.044706   55950 cache.go:56] Caching tarball of preloaded images
	I0818 19:51:37.044768   55950 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:51:37.044780   55950 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0818 19:51:37.044865   55950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/config.json ...
	I0818 19:51:37.044881   55950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/config.json: {Name:mkf9b8d2702a330d4d2943f7f0c4b225b9c8f971 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:51:37.045012   55950 start.go:360] acquireMachinesLock for kubernetes-upgrade-179876: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:52:01.349083   55950 start.go:364] duration metric: took 24.304044861s to acquireMachinesLock for "kubernetes-upgrade-179876"
	I0818 19:52:01.349170   55950 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-179876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 19:52:01.349274   55950 start.go:125] createHost starting for "" (driver="kvm2")
	I0818 19:52:01.351320   55950 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 19:52:01.351525   55950 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:52:01.351572   55950 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:52:01.372218   55950 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0818 19:52:01.372831   55950 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:52:01.373459   55950 main.go:141] libmachine: Using API Version  1
	I0818 19:52:01.373482   55950 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:52:01.373846   55950 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:52:01.374066   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetMachineName
	I0818 19:52:01.374281   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .DriverName
	I0818 19:52:01.374468   55950 start.go:159] libmachine.API.Create for "kubernetes-upgrade-179876" (driver="kvm2")
	I0818 19:52:01.374503   55950 client.go:168] LocalClient.Create starting
	I0818 19:52:01.374541   55950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 19:52:01.374583   55950 main.go:141] libmachine: Decoding PEM data...
	I0818 19:52:01.374600   55950 main.go:141] libmachine: Parsing certificate...
	I0818 19:52:01.374654   55950 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 19:52:01.374680   55950 main.go:141] libmachine: Decoding PEM data...
	I0818 19:52:01.374695   55950 main.go:141] libmachine: Parsing certificate...
	I0818 19:52:01.374718   55950 main.go:141] libmachine: Running pre-create checks...
	I0818 19:52:01.374730   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .PreCreateCheck
	I0818 19:52:01.375219   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetConfigRaw
	I0818 19:52:01.375793   55950 main.go:141] libmachine: Creating machine...
	I0818 19:52:01.375812   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .Create
	I0818 19:52:01.375978   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Creating KVM machine...
	I0818 19:52:01.377789   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found existing default KVM network
	I0818 19:52:01.379051   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:01.378848   56156 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c3:b2:16} reservation:<nil>}
	I0818 19:52:01.380355   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:01.380247   56156 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:40:80:be} reservation:<nil>}
	I0818 19:52:01.381812   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:01.381730   56156 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000308800}
	I0818 19:52:01.381860   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | created network xml: 
	I0818 19:52:01.381886   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | <network>
	I0818 19:52:01.381903   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |   <name>mk-kubernetes-upgrade-179876</name>
	I0818 19:52:01.381924   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |   <dns enable='no'/>
	I0818 19:52:01.381935   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |   
	I0818 19:52:01.381946   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0818 19:52:01.381961   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |     <dhcp>
	I0818 19:52:01.382030   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0818 19:52:01.382054   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |     </dhcp>
	I0818 19:52:01.382068   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |   </ip>
	I0818 19:52:01.382079   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG |   
	I0818 19:52:01.382099   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | </network>
	I0818 19:52:01.382117   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | 
	I0818 19:52:01.387370   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | trying to create private KVM network mk-kubernetes-upgrade-179876 192.168.61.0/24...
	I0818 19:52:01.461400   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | private KVM network mk-kubernetes-upgrade-179876 192.168.61.0/24 created
	I0818 19:52:01.461424   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876 ...
	I0818 19:52:01.461447   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:01.461383   56156 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:52:01.461460   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 19:52:01.461535   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 19:52:01.710948   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:01.710775   56156 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/id_rsa...
	I0818 19:52:01.895118   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:01.894988   56156 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/kubernetes-upgrade-179876.rawdisk...
	I0818 19:52:01.895149   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Writing magic tar header
	I0818 19:52:01.895163   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Writing SSH key tar header
	I0818 19:52:01.895175   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:01.895120   56156 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876 ...
	I0818 19:52:01.895311   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876 (perms=drwx------)
	I0818 19:52:01.895337   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 19:52:01.895351   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876
	I0818 19:52:01.895405   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 19:52:01.895421   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:52:01.895442   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 19:52:01.895455   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 19:52:01.895467   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 19:52:01.895478   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 19:52:01.895491   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Creating domain...
	I0818 19:52:01.895522   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 19:52:01.895546   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 19:52:01.895569   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Checking permissions on dir: /home/jenkins
	I0818 19:52:01.895582   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Checking permissions on dir: /home
	I0818 19:52:01.895595   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Skipping /home - not owner
	I0818 19:52:01.896701   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) define libvirt domain using xml: 
	I0818 19:52:01.896722   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) <domain type='kvm'>
	I0818 19:52:01.896732   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   <name>kubernetes-upgrade-179876</name>
	I0818 19:52:01.896741   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   <memory unit='MiB'>2200</memory>
	I0818 19:52:01.896749   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   <vcpu>2</vcpu>
	I0818 19:52:01.896757   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   <features>
	I0818 19:52:01.896765   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <acpi/>
	I0818 19:52:01.896777   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <apic/>
	I0818 19:52:01.896782   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <pae/>
	I0818 19:52:01.896790   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     
	I0818 19:52:01.896796   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   </features>
	I0818 19:52:01.896804   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   <cpu mode='host-passthrough'>
	I0818 19:52:01.896834   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   
	I0818 19:52:01.896853   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   </cpu>
	I0818 19:52:01.896881   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   <os>
	I0818 19:52:01.896893   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <type>hvm</type>
	I0818 19:52:01.896905   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <boot dev='cdrom'/>
	I0818 19:52:01.896915   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <boot dev='hd'/>
	I0818 19:52:01.896924   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <bootmenu enable='no'/>
	I0818 19:52:01.896942   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   </os>
	I0818 19:52:01.896953   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   <devices>
	I0818 19:52:01.896962   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <disk type='file' device='cdrom'>
	I0818 19:52:01.896974   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/boot2docker.iso'/>
	I0818 19:52:01.896981   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <target dev='hdc' bus='scsi'/>
	I0818 19:52:01.896987   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <readonly/>
	I0818 19:52:01.896994   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     </disk>
	I0818 19:52:01.897004   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <disk type='file' device='disk'>
	I0818 19:52:01.897021   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 19:52:01.897040   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/kubernetes-upgrade-179876.rawdisk'/>
	I0818 19:52:01.897051   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <target dev='hda' bus='virtio'/>
	I0818 19:52:01.897062   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     </disk>
	I0818 19:52:01.897072   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <interface type='network'>
	I0818 19:52:01.897082   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <source network='mk-kubernetes-upgrade-179876'/>
	I0818 19:52:01.897096   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <model type='virtio'/>
	I0818 19:52:01.897133   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     </interface>
	I0818 19:52:01.897155   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <interface type='network'>
	I0818 19:52:01.897166   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <source network='default'/>
	I0818 19:52:01.897177   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <model type='virtio'/>
	I0818 19:52:01.897186   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     </interface>
	I0818 19:52:01.897202   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <serial type='pty'>
	I0818 19:52:01.897215   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <target port='0'/>
	I0818 19:52:01.897226   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     </serial>
	I0818 19:52:01.897235   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <console type='pty'>
	I0818 19:52:01.897245   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <target type='serial' port='0'/>
	I0818 19:52:01.897257   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     </console>
	I0818 19:52:01.897273   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     <rng model='virtio'>
	I0818 19:52:01.897288   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)       <backend model='random'>/dev/random</backend>
	I0818 19:52:01.897299   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     </rng>
	I0818 19:52:01.897311   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     
	I0818 19:52:01.897322   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)     
	I0818 19:52:01.897335   55950 main.go:141] libmachine: (kubernetes-upgrade-179876)   </devices>
	I0818 19:52:01.897343   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) </domain>
	I0818 19:52:01.897380   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) 
	I0818 19:52:01.901674   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:ed:7b:ea in network default
	I0818 19:52:01.902316   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Ensuring networks are active...
	I0818 19:52:01.902340   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:01.903142   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Ensuring network default is active
	I0818 19:52:01.903587   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Ensuring network mk-kubernetes-upgrade-179876 is active
	I0818 19:52:01.904135   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Getting domain xml...
	I0818 19:52:01.904986   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Creating domain...
	I0818 19:52:03.263173   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Waiting to get IP...
	I0818 19:52:03.264359   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:03.265270   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:03.265304   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:03.265244   56156 retry.go:31] will retry after 278.40474ms: waiting for machine to come up
	I0818 19:52:03.545782   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:03.546458   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:03.546489   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:03.546395   56156 retry.go:31] will retry after 305.297275ms: waiting for machine to come up
	I0818 19:52:03.853957   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:03.854456   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:03.854501   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:03.854419   56156 retry.go:31] will retry after 488.232342ms: waiting for machine to come up
	I0818 19:52:04.344258   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:04.344898   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:04.344923   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:04.344869   56156 retry.go:31] will retry after 481.596938ms: waiting for machine to come up
	I0818 19:52:04.828463   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:04.828901   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:04.828954   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:04.828893   56156 retry.go:31] will retry after 591.308235ms: waiting for machine to come up
	I0818 19:52:05.421460   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:05.421925   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:05.421958   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:05.421872   56156 retry.go:31] will retry after 935.430889ms: waiting for machine to come up
	I0818 19:52:06.358759   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:06.359198   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:06.359227   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:06.359152   56156 retry.go:31] will retry after 826.281847ms: waiting for machine to come up
	I0818 19:52:07.186670   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:07.187252   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:07.187282   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:07.187188   56156 retry.go:31] will retry after 967.690769ms: waiting for machine to come up
	I0818 19:52:08.156360   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:08.156880   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:08.156926   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:08.156849   56156 retry.go:31] will retry after 1.152023564s: waiting for machine to come up
	I0818 19:52:09.311217   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:09.311681   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:09.311715   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:09.311632   56156 retry.go:31] will retry after 2.059029774s: waiting for machine to come up
	I0818 19:52:11.372517   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:11.373128   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:11.373158   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:11.373080   56156 retry.go:31] will retry after 2.253238445s: waiting for machine to come up
	I0818 19:52:13.628536   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:13.629015   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:13.629044   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:13.628965   56156 retry.go:31] will retry after 3.398370126s: waiting for machine to come up
	I0818 19:52:17.028742   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:17.029299   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:17.029331   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:17.029249   56156 retry.go:31] will retry after 3.196533717s: waiting for machine to come up
	I0818 19:52:20.228435   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:20.228839   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find current IP address of domain kubernetes-upgrade-179876 in network mk-kubernetes-upgrade-179876
	I0818 19:52:20.228867   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | I0818 19:52:20.228808   56156 retry.go:31] will retry after 3.526718212s: waiting for machine to come up
	I0818 19:52:23.757374   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:23.757837   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Found IP for machine: 192.168.61.147
	I0818 19:52:23.757861   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Reserving static IP address...
	I0818 19:52:23.757876   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has current primary IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:23.758177   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-179876", mac: "52:54:00:dd:04:0a", ip: "192.168.61.147"} in network mk-kubernetes-upgrade-179876
	I0818 19:52:23.836785   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Getting to WaitForSSH function...
	I0818 19:52:23.836824   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Reserved static IP address: 192.168.61.147
	I0818 19:52:23.836840   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Waiting for SSH to be available...
	I0818 19:52:23.840233   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:23.840716   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:23.840746   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:23.840974   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Using SSH client type: external
	I0818 19:52:23.841005   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/id_rsa (-rw-------)
	I0818 19:52:23.841048   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.147 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 19:52:23.841064   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | About to run SSH command:
	I0818 19:52:23.841101   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | exit 0
	I0818 19:52:23.963283   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | SSH cmd err, output: <nil>: 
	I0818 19:52:23.963581   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) KVM machine creation complete!
	I0818 19:52:23.963875   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetConfigRaw
	I0818 19:52:23.964397   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .DriverName
	I0818 19:52:23.964571   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .DriverName
	I0818 19:52:23.964700   55950 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 19:52:23.964716   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetState
	I0818 19:52:23.965934   55950 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 19:52:23.965945   55950 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 19:52:23.965951   55950 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 19:52:23.965957   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:23.968660   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:23.969039   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:23.969061   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:23.969238   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:23.969394   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:23.969554   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:23.969689   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:23.969907   55950 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:23.970165   55950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0818 19:52:23.970182   55950 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 19:52:24.070748   55950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:52:24.070788   55950 main.go:141] libmachine: Detecting the provisioner...
	I0818 19:52:24.070800   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:24.073320   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.073705   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:24.073733   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.073869   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:24.074114   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.074304   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.074455   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:24.074635   55950 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:24.074850   55950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0818 19:52:24.074866   55950 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 19:52:24.180644   55950 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 19:52:24.180733   55950 main.go:141] libmachine: found compatible host: buildroot
	I0818 19:52:24.180744   55950 main.go:141] libmachine: Provisioning with buildroot...
	I0818 19:52:24.180752   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetMachineName
	I0818 19:52:24.181118   55950 buildroot.go:166] provisioning hostname "kubernetes-upgrade-179876"
	I0818 19:52:24.181161   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetMachineName
	I0818 19:52:24.181355   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:24.184121   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.184549   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:24.184596   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.184704   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:24.184891   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.185032   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.185188   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:24.185387   55950 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:24.185603   55950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0818 19:52:24.185622   55950 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-179876 && echo "kubernetes-upgrade-179876" | sudo tee /etc/hostname
	I0818 19:52:24.307674   55950 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-179876
	
	I0818 19:52:24.307713   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:24.311217   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.311597   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:24.311633   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.311825   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:24.312041   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.312246   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.312394   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:24.312582   55950 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:24.312830   55950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0818 19:52:24.312857   55950 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-179876' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-179876/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-179876' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:52:24.428933   55950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:52:24.428966   55950 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 19:52:24.429030   55950 buildroot.go:174] setting up certificates
	I0818 19:52:24.429047   55950 provision.go:84] configureAuth start
	I0818 19:52:24.429066   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetMachineName
	I0818 19:52:24.429378   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetIP
	I0818 19:52:24.431708   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.432074   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:24.432110   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.432194   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:24.434513   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.434847   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:24.434893   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.435035   55950 provision.go:143] copyHostCerts
	I0818 19:52:24.435094   55950 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 19:52:24.435116   55950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:52:24.435181   55950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 19:52:24.435294   55950 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 19:52:24.435304   55950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:52:24.435336   55950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 19:52:24.435457   55950 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 19:52:24.435468   55950 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:52:24.435500   55950 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 19:52:24.435590   55950 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-179876 san=[127.0.0.1 192.168.61.147 kubernetes-upgrade-179876 localhost minikube]
	I0818 19:52:24.693679   55950 provision.go:177] copyRemoteCerts
	I0818 19:52:24.693734   55950 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:52:24.693764   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:24.696552   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.696855   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:24.696885   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.697025   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:24.697304   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.697485   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:24.697597   55950 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/id_rsa Username:docker}
	I0818 19:52:24.777923   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:52:24.802794   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0818 19:52:24.830156   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 19:52:24.854860   55950 provision.go:87] duration metric: took 425.796624ms to configureAuth
	I0818 19:52:24.854889   55950 buildroot.go:189] setting minikube options for container-runtime
	I0818 19:52:24.855065   55950 config.go:182] Loaded profile config "kubernetes-upgrade-179876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 19:52:24.855156   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:24.857640   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.858003   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:24.858032   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:24.858195   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:24.858415   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.858575   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:24.858700   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:24.858836   55950 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:24.858989   55950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0818 19:52:24.859003   55950 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 19:52:25.113281   55950 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 19:52:25.113308   55950 main.go:141] libmachine: Checking connection to Docker...
	I0818 19:52:25.113320   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetURL
	I0818 19:52:25.114606   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | Using libvirt version 6000000
	I0818 19:52:25.116996   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.117347   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:25.117372   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.117575   55950 main.go:141] libmachine: Docker is up and running!
	I0818 19:52:25.117589   55950 main.go:141] libmachine: Reticulating splines...
	I0818 19:52:25.117596   55950 client.go:171] duration metric: took 23.74308486s to LocalClient.Create
	I0818 19:52:25.117628   55950 start.go:167] duration metric: took 23.743178486s to libmachine.API.Create "kubernetes-upgrade-179876"
	I0818 19:52:25.117641   55950 start.go:293] postStartSetup for "kubernetes-upgrade-179876" (driver="kvm2")
	I0818 19:52:25.117655   55950 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:52:25.117678   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .DriverName
	I0818 19:52:25.117921   55950 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:52:25.117945   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:25.120140   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.120462   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:25.120502   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.120605   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:25.120773   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:25.120903   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:25.121025   55950 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/id_rsa Username:docker}
	I0818 19:52:25.197618   55950 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:52:25.202668   55950 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 19:52:25.202702   55950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 19:52:25.202762   55950 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 19:52:25.202828   55950 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 19:52:25.202914   55950 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:52:25.212808   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:52:25.237904   55950 start.go:296] duration metric: took 120.246767ms for postStartSetup
	I0818 19:52:25.237964   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetConfigRaw
	I0818 19:52:25.238568   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetIP
	I0818 19:52:25.241272   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.241612   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:25.241648   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.241883   55950 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/config.json ...
	I0818 19:52:25.242135   55950 start.go:128] duration metric: took 23.892847865s to createHost
	I0818 19:52:25.242169   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:25.244742   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.245151   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:25.245177   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.245321   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:25.245475   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:25.245638   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:25.245783   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:25.245965   55950 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:25.246145   55950 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.147 22 <nil> <nil>}
	I0818 19:52:25.246160   55950 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 19:52:25.348018   55950 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724010745.320250911
	
	I0818 19:52:25.348042   55950 fix.go:216] guest clock: 1724010745.320250911
	I0818 19:52:25.348052   55950 fix.go:229] Guest: 2024-08-18 19:52:25.320250911 +0000 UTC Remote: 2024-08-18 19:52:25.24215372 +0000 UTC m=+48.310739379 (delta=78.097191ms)
	I0818 19:52:25.348089   55950 fix.go:200] guest clock delta is within tolerance: 78.097191ms
	I0818 19:52:25.348094   55950 start.go:83] releasing machines lock for "kubernetes-upgrade-179876", held for 23.998972627s
	I0818 19:52:25.348116   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .DriverName
	I0818 19:52:25.348380   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetIP
	I0818 19:52:25.352319   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.352798   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:25.352824   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.352951   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .DriverName
	I0818 19:52:25.353517   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .DriverName
	I0818 19:52:25.353702   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .DriverName
	I0818 19:52:25.353771   55950 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:52:25.353821   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:25.353943   55950 ssh_runner.go:195] Run: cat /version.json
	I0818 19:52:25.353967   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHHostname
	I0818 19:52:25.356603   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.356765   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.356973   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:25.357000   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.357140   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:25.357244   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:25.357271   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:25.357326   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:25.357484   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:25.357495   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHPort
	I0818 19:52:25.357684   55950 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/id_rsa Username:docker}
	I0818 19:52:25.357743   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHKeyPath
	I0818 19:52:25.357861   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetSSHUsername
	I0818 19:52:25.358007   55950 sshutil.go:53] new ssh client: &{IP:192.168.61.147 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/kubernetes-upgrade-179876/id_rsa Username:docker}
	I0818 19:52:25.432453   55950 ssh_runner.go:195] Run: systemctl --version
	I0818 19:52:25.458350   55950 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 19:52:25.618911   55950 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 19:52:25.626635   55950 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 19:52:25.626701   55950 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:52:25.644098   55950 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
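	The find/mv step above is how any stale bridge or podman CNI configs are parked out of the way before CRI-O is reconfigured; a condensed sketch of the same idea, using only the paths and name patterns from the log (the quoting is tightened relative to the exact command minikube ran):

	  # rename bridge/podman CNI configs so the runtime ignores them
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	    -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;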
	I0818 19:52:25.644122   55950 start.go:495] detecting cgroup driver to use...
	I0818 19:52:25.644188   55950 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 19:52:25.661236   55950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 19:52:25.675714   55950 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:52:25.675765   55950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:52:25.689754   55950 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:52:25.703452   55950 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:52:25.820833   55950 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:52:25.969293   55950 docker.go:233] disabling docker service ...
	I0818 19:52:25.969368   55950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:52:25.983800   55950 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:52:25.997572   55950 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 19:52:26.157444   55950 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 19:52:26.313836   55950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:52:26.327699   55950 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:52:26.346528   55950 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 19:52:26.346585   55950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:26.356626   55950 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 19:52:26.356685   55950 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:26.366690   55950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:26.376493   55950 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:26.386982   55950 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:52:26.397598   55950 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:52:26.407006   55950 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 19:52:26.407064   55950 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 19:52:26.420246   55950 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:52:26.429478   55950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:52:26.562548   55950 ssh_runner.go:195] Run: sudo systemctl restart crio
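	Taken together, the commands above are the whole CRI-O preparation for this profile: point crictl at the CRI-O socket, pin the pause image, force the cgroupfs cgroup manager, clear stale CNI state, enable IP forwarding, and restart the runtime. A condensed sketch of the same sequence, reusing only the paths and values seen in the log:

	  # crictl talks to CRI-O via its socket
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  # pin the pause image and switch the cgroup manager in the stock drop-in config
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  # (after deleting any existing conmon_cgroup line, as the log does)
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  sudo rm -rf /etc/cni/net.mk
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	  sudo systemctl daemon-reload && sudo systemctl restart crio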
	I0818 19:52:26.698143   55950 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 19:52:26.698227   55950 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 19:52:26.703315   55950 start.go:563] Will wait 60s for crictl version
	I0818 19:52:26.703372   55950 ssh_runner.go:195] Run: which crictl
	I0818 19:52:26.706972   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:52:26.744898   55950 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 19:52:26.744985   55950 ssh_runner.go:195] Run: crio --version
	I0818 19:52:26.778705   55950 ssh_runner.go:195] Run: crio --version
	I0818 19:52:26.808166   55950 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 19:52:26.809395   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetIP
	I0818 19:52:26.812228   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:26.812602   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:52:16 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:52:26.812661   55950 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:52:26.812836   55950 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 19:52:26.817019   55950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 19:52:26.829398   55950 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-179876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:52:26.829491   55950 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 19:52:26.829531   55950 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:52:26.860070   55950 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 19:52:26.860134   55950 ssh_runner.go:195] Run: which lz4
	I0818 19:52:26.864351   55950 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 19:52:26.868739   55950 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 19:52:26.868780   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 19:52:28.460535   55950 crio.go:462] duration metric: took 1.596223673s to copy over tarball
	I0818 19:52:28.460614   55950 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 19:52:30.961225   55950 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.500586947s)
	I0818 19:52:30.961255   55950 crio.go:469] duration metric: took 2.500694609s to extract the tarball
	I0818 19:52:30.961265   55950 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 19:52:31.004001   55950 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:52:31.047823   55950 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 19:52:31.047847   55950 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 19:52:31.047930   55950 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 19:52:31.047952   55950 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:52:31.047971   55950 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:52:31.047994   55950 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 19:52:31.048025   55950 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 19:52:31.048024   55950 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:52:31.048055   55950 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 19:52:31.047937   55950 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:52:31.049508   55950 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:52:31.049590   55950 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 19:52:31.049617   55950 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:52:31.049633   55950 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 19:52:31.049669   55950 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 19:52:31.049679   55950 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:52:31.049734   55950 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 19:52:31.049744   55950 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:52:31.222603   55950 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 19:52:31.260081   55950 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 19:52:31.260129   55950 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 19:52:31.260189   55950 ssh_runner.go:195] Run: which crictl
	I0818 19:52:31.264401   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 19:52:31.272571   55950 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:52:31.321639   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 19:52:31.325229   55950 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 19:52:31.325268   55950 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:52:31.325311   55950 ssh_runner.go:195] Run: which crictl
	I0818 19:52:31.364193   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:52:31.364489   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 19:52:31.404210   55950 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:52:31.409705   55950 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:52:31.423446   55950 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 19:52:31.423692   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:52:31.435079   55950 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:52:31.435702   55950 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 19:52:31.470292   55950 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 19:52:31.539010   55950 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 19:52:31.539059   55950 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:52:31.539125   55950 ssh_runner.go:195] Run: which crictl
	I0818 19:52:31.539139   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:52:31.539332   55950 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 19:52:31.539363   55950 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:52:31.539412   55950 ssh_runner.go:195] Run: which crictl
	I0818 19:52:31.564540   55950 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 19:52:31.564593   55950 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 19:52:31.564642   55950 ssh_runner.go:195] Run: which crictl
	I0818 19:52:31.564546   55950 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 19:52:31.564680   55950 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:52:31.564726   55950 ssh_runner.go:195] Run: which crictl
	I0818 19:52:31.601521   55950 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 19:52:31.601569   55950 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 19:52:31.601617   55950 ssh_runner.go:195] Run: which crictl
	I0818 19:52:31.613312   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:52:31.613403   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:52:31.613405   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 19:52:31.613361   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:52:31.613332   55950 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 19:52:31.613370   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 19:52:31.725475   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 19:52:31.725543   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 19:52:31.734518   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:52:31.734518   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:52:31.734601   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:52:31.809603   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 19:52:31.851019   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 19:52:31.851125   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:52:31.855107   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:52:31.881330   55950 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:52:31.894315   55950 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 19:52:31.984510   55950 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 19:52:31.984592   55950 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 19:52:31.984695   55950 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 19:52:31.984889   55950 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 19:52:32.018769   55950 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 19:52:32.170048   55950 cache_images.go:92] duration metric: took 1.122182505s to LoadCachedImages
	W0818 19:52:32.170159   55950 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0818 19:52:32.170193   55950 kubeadm.go:934] updating node { 192.168.61.147 8443 v1.20.0 crio true true} ...
	I0818 19:52:32.170331   55950 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-179876 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 19:52:32.170444   55950 ssh_runner.go:195] Run: crio config
	I0818 19:52:32.224441   55950 cni.go:84] Creating CNI manager for ""
	I0818 19:52:32.224465   55950 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:52:32.224477   55950 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:52:32.224502   55950 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.147 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-179876 NodeName:kubernetes-upgrade-179876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 19:52:32.224706   55950 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-179876"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 19:52:32.224770   55950 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 19:52:32.235641   55950 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:52:32.235718   55950 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 19:52:32.246202   55950 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0818 19:52:32.265711   55950 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:52:32.284897   55950 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0818 19:52:32.306900   55950 ssh_runner.go:195] Run: grep 192.168.61.147	control-plane.minikube.internal$ /etc/hosts
	I0818 19:52:32.312294   55950 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.147	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
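	Both /etc/hosts edits in this log (host.minikube.internal earlier, control-plane.minikube.internal here) use the same rewrite-through-a-temp-file pattern, so a half-written hosts file is never left in place. The same command as above, reformatted for readability:

	  { grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; \
	    echo "192.168.61.147	control-plane.minikube.internal"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ "/etc/hosts"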
	I0818 19:52:32.328378   55950 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:52:32.489694   55950 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:52:32.511157   55950 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876 for IP: 192.168.61.147
	I0818 19:52:32.511185   55950 certs.go:194] generating shared ca certs ...
	I0818 19:52:32.511246   55950 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:52:32.511449   55950 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 19:52:32.511523   55950 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 19:52:32.511533   55950 certs.go:256] generating profile certs ...
	I0818 19:52:32.511605   55950 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/client.key
	I0818 19:52:32.511635   55950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/client.crt with IP's: []
	I0818 19:52:32.709854   55950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/client.crt ...
	I0818 19:52:32.709892   55950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/client.crt: {Name:mk2680c60560edc4ead98537261293b2197b7b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:52:32.780921   55950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/client.key ...
	I0818 19:52:32.780995   55950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/client.key: {Name:mk5256c8b290034f947fd7a3af6030e614f21914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:52:32.781201   55950 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.key.7c7b5bcd
	I0818 19:52:32.781225   55950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.crt.7c7b5bcd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.147]
	I0818 19:52:32.969869   55950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.crt.7c7b5bcd ...
	I0818 19:52:32.969894   55950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.crt.7c7b5bcd: {Name:mk91a0a4edaf33f6d526fa721f593670ab3db459 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:52:32.970043   55950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.key.7c7b5bcd ...
	I0818 19:52:32.970057   55950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.key.7c7b5bcd: {Name:mke986f432398f6d8c841949879cee9ab945230f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:52:32.970122   55950 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.crt.7c7b5bcd -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.crt
	I0818 19:52:32.970207   55950 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.key.7c7b5bcd -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.key
	I0818 19:52:32.970265   55950 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.key
	I0818 19:52:32.970279   55950 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.crt with IP's: []
	I0818 19:52:33.197578   55950 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.crt ...
	I0818 19:52:33.197622   55950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.crt: {Name:mkd9ad511bb155ad026500e0f184d86206bd2afd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:52:33.197812   55950 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.key ...
	I0818 19:52:33.197834   55950 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.key: {Name:mk51180757a96f7a5cdbc9b34091b329587eb61e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:52:33.198043   55950 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 19:52:33.198100   55950 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 19:52:33.198116   55950 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:52:33.198151   55950 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:52:33.198190   55950 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:52:33.198221   55950 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 19:52:33.198275   55950 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:52:33.198894   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:52:33.228970   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:52:33.257708   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:52:33.286864   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 19:52:33.313392   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0818 19:52:33.360952   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 19:52:33.389694   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:52:33.414587   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 19:52:33.438135   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 19:52:33.461272   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 19:52:33.485886   55950 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 19:52:33.510459   55950 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:52:33.529389   55950 ssh_runner.go:195] Run: openssl version
	I0818 19:52:33.535570   55950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:52:33.546749   55950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:52:33.551327   55950 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:52:33.551398   55950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:52:33.557527   55950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 19:52:33.568928   55950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 19:52:33.580601   55950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 19:52:33.585414   55950 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:52:33.585467   55950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 19:52:33.591639   55950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 19:52:33.602977   55950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 19:52:33.614051   55950 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 19:52:33.618850   55950 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:52:33.618901   55950 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 19:52:33.624850   55950 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
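	Each CA bundle above is installed by placing it under /usr/share/ca-certificates, symlinking it into /etc/ssl/certs, and adding a second symlink named after the certificate's OpenSSL subject hash, which is the name OpenSSL-based tools look up at verification time. A sketch for the minikubeCA bundle, using the hash this run resolved (b5213941):

	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 here
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0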
	I0818 19:52:33.635705   55950 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:52:33.639884   55950 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 19:52:33.639934   55950 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-179876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-179876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:52:33.640020   55950 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 19:52:33.640073   55950 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:52:33.687562   55950 cri.go:89] found id: ""
	I0818 19:52:33.687646   55950 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 19:52:33.697604   55950 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 19:52:33.707258   55950 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 19:52:33.716560   55950 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 19:52:33.716587   55950 kubeadm.go:157] found existing configuration files:
	
	I0818 19:52:33.716645   55950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 19:52:33.725695   55950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 19:52:33.725766   55950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 19:52:33.735182   55950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 19:52:33.744147   55950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 19:52:33.744207   55950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 19:52:33.753561   55950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 19:52:33.762366   55950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 19:52:33.762430   55950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 19:52:33.772603   55950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 19:52:33.781577   55950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 19:52:33.781640   55950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
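The four grep/rm pairs above are the same stale-config guard applied to admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf: if the kubeconfig is missing or does not mention the expected control-plane endpoint, delete it so kubeadm init can rewrite it. A sketch of that guard for one file (endpoint and path as printed in the log):

    if ! sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf; then
        sudo rm -f /etc/kubernetes/admin.conf
    fi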
	I0818 19:52:33.791073   55950 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 19:52:33.929414   55950 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 19:52:33.929656   55950 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 19:52:34.092530   55950 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 19:52:34.092718   55950 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 19:52:34.092847   55950 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 19:52:34.329385   55950 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 19:52:34.396740   55950 out.go:235]   - Generating certificates and keys ...
	I0818 19:52:34.396857   55950 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 19:52:34.396973   55950 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 19:52:34.412531   55950 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0818 19:52:34.549819   55950 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0818 19:52:34.748353   55950 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0818 19:52:34.855315   55950 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0818 19:52:35.044181   55950 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0818 19:52:35.044366   55950 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-179876 localhost] and IPs [192.168.61.147 127.0.0.1 ::1]
	I0818 19:52:35.179050   55950 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0818 19:52:35.179206   55950 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-179876 localhost] and IPs [192.168.61.147 127.0.0.1 ::1]
	I0818 19:52:35.416104   55950 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0818 19:52:35.609584   55950 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0818 19:52:35.731426   55950 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0818 19:52:35.731837   55950 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 19:52:35.873588   55950 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 19:52:36.084857   55950 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 19:52:36.307783   55950 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 19:52:36.559310   55950 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 19:52:36.578754   55950 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 19:52:36.580108   55950 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 19:52:36.580193   55950 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 19:52:36.735576   55950 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 19:52:36.737369   55950 out.go:235]   - Booting up control plane ...
	I0818 19:52:36.737557   55950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 19:52:36.752696   55950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 19:52:36.754147   55950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 19:52:36.755405   55950 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 19:52:36.760731   55950 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 19:53:16.752529   55950 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 19:53:16.753667   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:53:16.754035   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:53:21.754629   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:53:21.754900   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:53:31.754161   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:53:31.754439   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:53:51.754006   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:53:51.754220   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:54:31.755763   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:54:31.755985   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:54:31.755994   55950 kubeadm.go:310] 
	I0818 19:54:31.756044   55950 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 19:54:31.756103   55950 kubeadm.go:310] 		timed out waiting for the condition
	I0818 19:54:31.756135   55950 kubeadm.go:310] 
	I0818 19:54:31.756193   55950 kubeadm.go:310] 	This error is likely caused by:
	I0818 19:54:31.756252   55950 kubeadm.go:310] 		- The kubelet is not running
	I0818 19:54:31.756375   55950 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 19:54:31.756387   55950 kubeadm.go:310] 
	I0818 19:54:31.756519   55950 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 19:54:31.756576   55950 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 19:54:31.756622   55950 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 19:54:31.756628   55950 kubeadm.go:310] 
	I0818 19:54:31.756768   55950 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 19:54:31.756895   55950 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 19:54:31.756914   55950 kubeadm.go:310] 
	I0818 19:54:31.757056   55950 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 19:54:31.757185   55950 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 19:54:31.757307   55950 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 19:54:31.757416   55950 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 19:54:31.757429   55950 kubeadm.go:310] 
	I0818 19:54:31.757962   55950 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 19:54:31.758077   55950 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 19:54:31.758176   55950 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0818 19:54:31.758345   55950 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-179876 localhost] and IPs [192.168.61.147 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-179876 localhost] and IPs [192.168.61.147 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-179876 localhost] and IPs [192.168.61.147 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-179876 localhost] and IPs [192.168.61.147 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
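Before the retry that follows, the kubeadm failure text above already names the relevant diagnostics for this CRI-O node; collected in one place (socket path exactly as kubeadm prints it, CONTAINERID taken from the ps output):

    systemctl status kubelet
    journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID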
	
	I0818 19:54:31.758388   55950 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 19:54:32.989143   55950 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.230731456s)
	I0818 19:54:32.989228   55950 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:54:33.005256   55950 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 19:54:33.017233   55950 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 19:54:33.017253   55950 kubeadm.go:157] found existing configuration files:
	
	I0818 19:54:33.017306   55950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 19:54:33.033650   55950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 19:54:33.034057   55950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 19:54:33.058125   55950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 19:54:33.068720   55950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 19:54:33.068786   55950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 19:54:33.079453   55950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 19:54:33.090698   55950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 19:54:33.090751   55950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 19:54:33.101544   55950 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 19:54:33.113049   55950 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 19:54:33.113114   55950 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 19:54:33.126026   55950 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 19:54:33.199582   55950 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 19:54:33.199910   55950 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 19:54:33.344174   55950 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 19:54:33.344304   55950 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 19:54:33.344460   55950 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 19:54:33.544022   55950 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 19:54:33.546045   55950 out.go:235]   - Generating certificates and keys ...
	I0818 19:54:33.546153   55950 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 19:54:33.546242   55950 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 19:54:33.546346   55950 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 19:54:33.546426   55950 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 19:54:33.546520   55950 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 19:54:33.546592   55950 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 19:54:33.546663   55950 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 19:54:33.546745   55950 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 19:54:33.546843   55950 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 19:54:33.546940   55950 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 19:54:33.546989   55950 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 19:54:33.547065   55950 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 19:54:33.748027   55950 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 19:54:33.833155   55950 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 19:54:33.940038   55950 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 19:54:34.139137   55950 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 19:54:34.158558   55950 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 19:54:34.161875   55950 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 19:54:34.162193   55950 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 19:54:34.329906   55950 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 19:54:34.331804   55950 out.go:235]   - Booting up control plane ...
	I0818 19:54:34.331934   55950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 19:54:34.341111   55950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 19:54:34.343966   55950 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 19:54:34.344073   55950 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 19:54:34.350026   55950 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 19:55:14.351787   55950 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 19:55:14.351885   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:55:14.352092   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:55:19.352469   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:55:19.352762   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:55:29.353652   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:55:29.353901   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:55:49.352725   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:55:49.352900   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:56:29.352915   55950 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:56:29.353163   55950 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:56:29.353182   55950 kubeadm.go:310] 
	I0818 19:56:29.353245   55950 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 19:56:29.353306   55950 kubeadm.go:310] 		timed out waiting for the condition
	I0818 19:56:29.353319   55950 kubeadm.go:310] 
	I0818 19:56:29.353370   55950 kubeadm.go:310] 	This error is likely caused by:
	I0818 19:56:29.353415   55950 kubeadm.go:310] 		- The kubelet is not running
	I0818 19:56:29.353583   55950 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 19:56:29.353597   55950 kubeadm.go:310] 
	I0818 19:56:29.353714   55950 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 19:56:29.353759   55950 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 19:56:29.353792   55950 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 19:56:29.353799   55950 kubeadm.go:310] 
	I0818 19:56:29.353891   55950 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 19:56:29.353956   55950 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 19:56:29.353962   55950 kubeadm.go:310] 
	I0818 19:56:29.354050   55950 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 19:56:29.354133   55950 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 19:56:29.354201   55950 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 19:56:29.354265   55950 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 19:56:29.354273   55950 kubeadm.go:310] 
	I0818 19:56:29.354777   55950 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 19:56:29.354888   55950 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 19:56:29.354985   55950 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 19:56:29.355055   55950 kubeadm.go:394] duration metric: took 3m55.715126035s to StartCluster
	I0818 19:56:29.355099   55950 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 19:56:29.355151   55950 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 19:56:29.398246   55950 cri.go:89] found id: ""
	I0818 19:56:29.398296   55950 logs.go:276] 0 containers: []
	W0818 19:56:29.398308   55950 logs.go:278] No container was found matching "kube-apiserver"
	I0818 19:56:29.398320   55950 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 19:56:29.398392   55950 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 19:56:29.441633   55950 cri.go:89] found id: ""
	I0818 19:56:29.441663   55950 logs.go:276] 0 containers: []
	W0818 19:56:29.441673   55950 logs.go:278] No container was found matching "etcd"
	I0818 19:56:29.441680   55950 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 19:56:29.441757   55950 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 19:56:29.486945   55950 cri.go:89] found id: ""
	I0818 19:56:29.486974   55950 logs.go:276] 0 containers: []
	W0818 19:56:29.486985   55950 logs.go:278] No container was found matching "coredns"
	I0818 19:56:29.486992   55950 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 19:56:29.487059   55950 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 19:56:29.532293   55950 cri.go:89] found id: ""
	I0818 19:56:29.532324   55950 logs.go:276] 0 containers: []
	W0818 19:56:29.532334   55950 logs.go:278] No container was found matching "kube-scheduler"
	I0818 19:56:29.532341   55950 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 19:56:29.532402   55950 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 19:56:29.568768   55950 cri.go:89] found id: ""
	I0818 19:56:29.568800   55950 logs.go:276] 0 containers: []
	W0818 19:56:29.568808   55950 logs.go:278] No container was found matching "kube-proxy"
	I0818 19:56:29.568814   55950 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 19:56:29.568862   55950 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 19:56:29.625314   55950 cri.go:89] found id: ""
	I0818 19:56:29.625371   55950 logs.go:276] 0 containers: []
	W0818 19:56:29.625383   55950 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 19:56:29.625392   55950 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 19:56:29.625449   55950 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 19:56:29.688927   55950 cri.go:89] found id: ""
	I0818 19:56:29.688954   55950 logs.go:276] 0 containers: []
	W0818 19:56:29.688964   55950 logs.go:278] No container was found matching "kindnet"
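Every per-component query above returned an empty ID list, i.e. CRI-O never created any control-plane containers, which points at the kubelet itself rather than a crashing pod. The probe minikube repeats for each component is simply:

    # repeat with --name=etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet
    sudo crictl ps -a --quiet --name=kube-apiserver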
	I0818 19:56:29.688978   55950 logs.go:123] Gathering logs for dmesg ...
	I0818 19:56:29.688994   55950 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 19:56:29.706024   55950 logs.go:123] Gathering logs for describe nodes ...
	I0818 19:56:29.706058   55950 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 19:56:29.865369   55950 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 19:56:29.865396   55950 logs.go:123] Gathering logs for CRI-O ...
	I0818 19:56:29.865411   55950 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 19:56:30.011779   55950 logs.go:123] Gathering logs for container status ...
	I0818 19:56:30.011818   55950 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 19:56:30.055896   55950 logs.go:123] Gathering logs for kubelet ...
	I0818 19:56:30.055930   55950 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 19:56:30.110392   55950 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 19:56:30.110461   55950 out.go:270] * 
	* 
	W0818 19:56:30.110532   55950 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 19:56:30.110551   55950 out.go:270] * 
	* 
	W0818 19:56:30.111551   55950 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 19:56:30.114634   55950 out.go:201] 
	W0818 19:56:30.115883   55950 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 19:56:30.115975   55950 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 19:56:30.116005   55950 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 19:56:30.117462   55950 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
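For reference, the suggestion emitted in the stderr above amounts to retrying the same start with the kubelet cgroup driver pinned to systemd. A minimal sketch assembled only from the profile name and flags already shown in this log (illustrative; not a command recorded as part of this test run):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd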
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-179876
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-179876: (2.3310735s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-179876 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-179876 status --format={{.Host}}: exit status 7 (100.595387ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0818 19:56:44.019092   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m6.702997805s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-179876 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (79.535866ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-179876] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-179876
	    minikube start -p kubernetes-upgrade-179876 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1798762 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-179876 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-179876 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.604814463s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-18 19:58:19.070275594 +0000 UTC m=+4809.462614889
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-179876 -n kubernetes-upgrade-179876
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-179876 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-179876 logs -n 25: (1.775524868s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609 sudo cat                | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609 sudo cat                | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609 sudo cat                | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-754609                         | enable-default-cni-754609 | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	| start   | -p old-k8s-version-247539                            | old-k8s-version-247539    | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-754609 pgrep -a                           | flannel-754609            | jenkins | v1.33.1 | 18 Aug 24 19:57 UTC | 18 Aug 24 19:57 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p bridge-754609 pgrep -a                            | bridge-754609             | jenkins | v1.33.1 | 18 Aug 24 19:58 UTC | 18 Aug 24 19:58 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-754609 sudo cat                           | flannel-754609            | jenkins | v1.33.1 | 18 Aug 24 19:58 UTC | 18 Aug 24 19:58 UTC |
	|         | /etc/nsswitch.conf                                   |                           |         |         |                     |                     |
	| ssh     | -p flannel-754609 sudo cat                           | flannel-754609            | jenkins | v1.33.1 | 18 Aug 24 19:58 UTC | 18 Aug 24 19:58 UTC |
	|         | /etc/hosts                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-754609 sudo cat                           | flannel-754609            | jenkins | v1.33.1 | 18 Aug 24 19:58 UTC | 18 Aug 24 19:58 UTC |
	|         | /etc/resolv.conf                                     |                           |         |         |                     |                     |
	| ssh     | -p flannel-754609 sudo crictl                        | flannel-754609            | jenkins | v1.33.1 | 18 Aug 24 19:58 UTC | 18 Aug 24 19:58 UTC |
	|         | pods                                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-754609 sudo crictl                        | flannel-754609            | jenkins | v1.33.1 | 18 Aug 24 19:58 UTC | 18 Aug 24 19:58 UTC |
	|         | ps --all                                             |                           |         |         |                     |                     |
	| ssh     | -p flannel-754609 sudo find                          | flannel-754609            | jenkins | v1.33.1 | 18 Aug 24 19:58 UTC | 18 Aug 24 19:58 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-754609 sudo ip a s                        | flannel-754609            | jenkins | v1.33.1 | 18 Aug 24 19:58 UTC |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 19:57:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 19:57:57.631272   67330 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:57:57.631374   67330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:57:57.631395   67330 out.go:358] Setting ErrFile to fd 2...
	I0818 19:57:57.631402   67330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:57:57.631599   67330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:57:57.632162   67330 out.go:352] Setting JSON to false
	I0818 19:57:57.633787   67330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6022,"bootTime":1724005056,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:57:57.633927   67330 start.go:139] virtualization: kvm guest
	I0818 19:57:57.635982   67330 out.go:177] * [old-k8s-version-247539] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:57:57.638154   67330 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:57:57.638163   67330 notify.go:220] Checking for updates...
	I0818 19:57:57.639908   67330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:57:57.641396   67330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:57:57.642819   67330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:57:57.644128   67330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:57:57.645540   67330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:57:57.647796   67330 config.go:182] Loaded profile config "bridge-754609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:57:57.647907   67330 config.go:182] Loaded profile config "flannel-754609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:57:57.648002   67330 config.go:182] Loaded profile config "kubernetes-upgrade-179876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:57:57.648102   67330 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:57:57.689373   67330 out.go:177] * Using the kvm2 driver based on user configuration
	I0818 19:57:57.690772   67330 start.go:297] selected driver: kvm2
	I0818 19:57:57.690797   67330 start.go:901] validating driver "kvm2" against <nil>
	I0818 19:57:57.690813   67330 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:57:57.691597   67330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:57:57.691686   67330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:57:57.710728   67330 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:57:57.710785   67330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 19:57:57.711052   67330 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:57:57.711111   67330 cni.go:84] Creating CNI manager for ""
	I0818 19:57:57.711125   67330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:57:57.711138   67330 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 19:57:57.711203   67330 start.go:340] cluster config:
	{Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:57:57.711315   67330 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:57:57.713830   67330 out.go:177] * Starting "old-k8s-version-247539" primary control-plane node in "old-k8s-version-247539" cluster
	I0818 19:57:57.715569   67330 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 19:57:57.715604   67330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0818 19:57:57.715613   67330 cache.go:56] Caching tarball of preloaded images
	I0818 19:57:57.715707   67330 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:57:57.715720   67330 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0818 19:57:57.715807   67330 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 19:57:57.715824   67330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json: {Name:mkb4188f9b593942a2eada7595484de4b7d28645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:57:57.715974   67330 start.go:360] acquireMachinesLock for old-k8s-version-247539: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:57:57.716009   67330 start.go:364] duration metric: took 18.162µs to acquireMachinesLock for "old-k8s-version-247539"
	I0818 19:57:57.716032   67330 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 19:57:57.716103   67330 start.go:125] createHost starting for "" (driver="kvm2")
	I0818 19:57:54.590021   65906 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:57:54.606046   65906 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:57:54.642317   65906 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 19:57:54.642395   65906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:57:54.663298   65906 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 19:57:54.663367   65906 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:57:54.698960   65906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:57:54.720445   65906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:57:54.734003   65906 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:57:54.755029   65906 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:57:54.788598   65906 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:57:54.803076   65906 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:57:54.814813   65906 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:57:54.825238   65906 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:57:54.835904   65906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:57:54.991019   65906 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 19:57:57.744848   65906 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.75379961s)
	I0818 19:57:57.744875   65906 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 19:57:57.744910   65906 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 19:57:57.751351   65906 start.go:563] Will wait 60s for crictl version
	I0818 19:57:57.751424   65906 ssh_runner.go:195] Run: which crictl
	I0818 19:57:57.768410   65906 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:57:57.833594   65906 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 19:57:57.833665   65906 ssh_runner.go:195] Run: crio --version
	I0818 19:57:57.882777   65906 ssh_runner.go:195] Run: crio --version
	I0818 19:57:57.916895   65906 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 19:57:57.918364   65906 main.go:141] libmachine: (kubernetes-upgrade-179876) Calling .GetIP
	I0818 19:57:57.921895   65906 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:57:57.922403   65906 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:04:0a", ip: ""} in network mk-kubernetes-upgrade-179876: {Iface:virbr3 ExpiryTime:2024-08-18 20:57:11 +0000 UTC Type:0 Mac:52:54:00:dd:04:0a Iaid: IPaddr:192.168.61.147 Prefix:24 Hostname:kubernetes-upgrade-179876 Clientid:01:52:54:00:dd:04:0a}
	I0818 19:57:57.922436   65906 main.go:141] libmachine: (kubernetes-upgrade-179876) DBG | domain kubernetes-upgrade-179876 has defined IP address 192.168.61.147 and MAC address 52:54:00:dd:04:0a in network mk-kubernetes-upgrade-179876
	I0818 19:57:57.922723   65906 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 19:57:57.927920   65906 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-179876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:kubernetes-upgrade-179876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:57:57.928072   65906 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:57:57.928173   65906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:57:57.986234   65906 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:57:57.986260   65906 crio.go:433] Images already preloaded, skipping extraction
	I0818 19:57:57.986314   65906 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:57:58.039522   65906 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:57:58.039546   65906 cache_images.go:84] Images are preloaded, skipping loading
	I0818 19:57:58.039554   65906 kubeadm.go:934] updating node { 192.168.61.147 8443 v1.31.0 crio true true} ...
	I0818 19:57:58.039642   65906 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-179876 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.147
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-179876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 19:57:58.039713   65906 ssh_runner.go:195] Run: crio config
	I0818 19:57:58.113167   65906 cni.go:84] Creating CNI manager for ""
	I0818 19:57:58.113190   65906 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:57:58.113202   65906 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:57:58.113229   65906 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.147 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-179876 NodeName:kubernetes-upgrade-179876 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.147"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.147 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 19:57:58.113409   65906 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.147
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-179876"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.147
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.147"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 19:57:58.113469   65906 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 19:57:58.135890   65906 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:57:58.135957   65906 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 19:57:58.152239   65906 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0818 19:57:58.174756   65906 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:57:58.204991   65906 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0818 19:57:58.229113   65906 ssh_runner.go:195] Run: grep 192.168.61.147	control-plane.minikube.internal$ /etc/hosts
	I0818 19:57:58.235007   65906 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:57:58.430907   65906 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:57:58.449278   65906 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876 for IP: 192.168.61.147
	I0818 19:57:58.449302   65906 certs.go:194] generating shared ca certs ...
	I0818 19:57:58.449320   65906 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:57:58.449497   65906 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 19:57:58.449543   65906 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 19:57:58.449553   65906 certs.go:256] generating profile certs ...
	I0818 19:57:58.449650   65906 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/client.key
	I0818 19:57:58.449693   65906 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.key.7c7b5bcd
	I0818 19:57:58.449731   65906 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.key
	I0818 19:57:58.449832   65906 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 19:57:58.449854   65906 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 19:57:58.449865   65906 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:57:58.449887   65906 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:57:58.449907   65906 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:57:58.449924   65906 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 19:57:58.449956   65906 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:57:58.450543   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:57:58.481018   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:57:58.511143   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:57:58.542043   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 19:57:58.571582   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0818 19:57:58.606374   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 19:57:58.637617   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:57:58.669499   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kubernetes-upgrade-179876/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 19:57:58.701925   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 19:57:58.803793   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 19:57:58.880625   65906 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 19:57:58.927752   65906 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:57:58.964244   65906 ssh_runner.go:195] Run: openssl version
	I0818 19:57:58.976256   65906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 19:57:58.997937   65906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 19:57:59.009081   65906 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:57:59.009152   65906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 19:57:59.053126   65906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 19:57:59.105889   65906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 19:57:59.148999   65906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 19:57:59.174784   65906 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:57:59.174944   65906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 19:57:59.191170   65906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 19:57:59.239704   65906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:57:59.277259   65906 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:57:59.294010   65906 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:57:59.294084   65906 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:57:59.326659   65906 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 19:57:59.353269   65906 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:57:59.367910   65906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 19:57:59.393276   65906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 19:57:59.408045   65906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 19:57:59.419737   65906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 19:57:59.430951   65906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 19:57:59.442007   65906 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 19:57:59.450197   65906 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-179876 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0 ClusterName:kubernetes-upgrade-179876 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.147 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:57:59.450308   65906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 19:57:59.450375   65906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:57:56.225556   65481 out.go:235]   - Booting up control plane ...
	I0818 19:57:56.225675   65481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 19:57:56.225784   65481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 19:57:56.225892   65481 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 19:57:56.257581   65481 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 19:57:56.266176   65481 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 19:57:56.266249   65481 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 19:57:56.403593   65481 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 19:57:56.403771   65481 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 19:57:57.407805   65481 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004861265s
	I0818 19:57:57.407879   65481 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 19:57:57.717820   67330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 19:57:57.718000   67330 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:57:57.718047   67330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:57:57.736836   67330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0818 19:57:57.737315   67330 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:57:57.738421   67330 main.go:141] libmachine: Using API Version  1
	I0818 19:57:57.738444   67330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:57:57.738861   67330 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:57:57.739197   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 19:57:57.739369   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:57:57.739567   67330 start.go:159] libmachine.API.Create for "old-k8s-version-247539" (driver="kvm2")
	I0818 19:57:57.739593   67330 client.go:168] LocalClient.Create starting
	I0818 19:57:57.739638   67330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 19:57:57.739684   67330 main.go:141] libmachine: Decoding PEM data...
	I0818 19:57:57.739706   67330 main.go:141] libmachine: Parsing certificate...
	I0818 19:57:57.739789   67330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 19:57:57.739815   67330 main.go:141] libmachine: Decoding PEM data...
	I0818 19:57:57.739834   67330 main.go:141] libmachine: Parsing certificate...
	I0818 19:57:57.739863   67330 main.go:141] libmachine: Running pre-create checks...
	I0818 19:57:57.739880   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .PreCreateCheck
	I0818 19:57:57.740278   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 19:57:57.740661   67330 main.go:141] libmachine: Creating machine...
	I0818 19:57:57.740676   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .Create
	I0818 19:57:57.740819   67330 main.go:141] libmachine: (old-k8s-version-247539) Creating KVM machine...
	I0818 19:57:57.742438   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found existing default KVM network
	I0818 19:57:57.744320   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:57.744138   67359 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:2d:b9} reservation:<nil>}
	I0818 19:57:57.745875   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:57.745799   67359 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000348110}
	I0818 19:57:57.746022   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | created network xml: 
	I0818 19:57:57.746047   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | <network>
	I0818 19:57:57.746062   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   <name>mk-old-k8s-version-247539</name>
	I0818 19:57:57.746075   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   <dns enable='no'/>
	I0818 19:57:57.746085   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   
	I0818 19:57:57.746099   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0818 19:57:57.746109   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |     <dhcp>
	I0818 19:57:57.746131   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0818 19:57:57.746141   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |     </dhcp>
	I0818 19:57:57.746147   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   </ip>
	I0818 19:57:57.746155   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   
	I0818 19:57:57.746163   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | </network>
	I0818 19:57:57.746176   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | 
	I0818 19:57:57.751775   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | trying to create private KVM network mk-old-k8s-version-247539 192.168.50.0/24...
	I0818 19:57:57.864938   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539 ...
	I0818 19:57:57.864962   67330 main.go:141] libmachine: (old-k8s-version-247539) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 19:57:57.864978   67330 main.go:141] libmachine: (old-k8s-version-247539) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 19:57:57.864992   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | private KVM network mk-old-k8s-version-247539 192.168.50.0/24 created
	I0818 19:57:57.865003   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:57.864237   67359 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:57:58.184522   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:58.181782   67359 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa...
	I0818 19:57:58.575954   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:58.575836   67359 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/old-k8s-version-247539.rawdisk...
	I0818 19:57:58.575997   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Writing magic tar header
	I0818 19:57:58.576014   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Writing SSH key tar header
	I0818 19:57:58.576027   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:58.575958   67359 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539 ...
	I0818 19:57:58.576043   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539
	I0818 19:57:58.576133   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539 (perms=drwx------)
	I0818 19:57:58.576162   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 19:57:58.576175   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 19:57:58.576193   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:57:58.576206   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 19:57:58.576222   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 19:57:58.576241   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins
	I0818 19:57:58.576256   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 19:57:58.576268   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home
	I0818 19:57:58.576282   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 19:57:58.576294   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 19:57:58.576307   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 19:57:58.576319   67330 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 19:57:58.576333   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Skipping /home - not owner
	I0818 19:57:58.577273   67330 main.go:141] libmachine: (old-k8s-version-247539) define libvirt domain using xml: 
	I0818 19:57:58.577315   67330 main.go:141] libmachine: (old-k8s-version-247539) <domain type='kvm'>
	I0818 19:57:58.577326   67330 main.go:141] libmachine: (old-k8s-version-247539)   <name>old-k8s-version-247539</name>
	I0818 19:57:58.577337   67330 main.go:141] libmachine: (old-k8s-version-247539)   <memory unit='MiB'>2200</memory>
	I0818 19:57:58.577346   67330 main.go:141] libmachine: (old-k8s-version-247539)   <vcpu>2</vcpu>
	I0818 19:57:58.577356   67330 main.go:141] libmachine: (old-k8s-version-247539)   <features>
	I0818 19:57:58.577364   67330 main.go:141] libmachine: (old-k8s-version-247539)     <acpi/>
	I0818 19:57:58.577375   67330 main.go:141] libmachine: (old-k8s-version-247539)     <apic/>
	I0818 19:57:58.577392   67330 main.go:141] libmachine: (old-k8s-version-247539)     <pae/>
	I0818 19:57:58.577403   67330 main.go:141] libmachine: (old-k8s-version-247539)     
	I0818 19:57:58.577411   67330 main.go:141] libmachine: (old-k8s-version-247539)   </features>
	I0818 19:57:58.577422   67330 main.go:141] libmachine: (old-k8s-version-247539)   <cpu mode='host-passthrough'>
	I0818 19:57:58.577431   67330 main.go:141] libmachine: (old-k8s-version-247539)   
	I0818 19:57:58.577442   67330 main.go:141] libmachine: (old-k8s-version-247539)   </cpu>
	I0818 19:57:58.577450   67330 main.go:141] libmachine: (old-k8s-version-247539)   <os>
	I0818 19:57:58.577460   67330 main.go:141] libmachine: (old-k8s-version-247539)     <type>hvm</type>
	I0818 19:57:58.577472   67330 main.go:141] libmachine: (old-k8s-version-247539)     <boot dev='cdrom'/>
	I0818 19:57:58.577482   67330 main.go:141] libmachine: (old-k8s-version-247539)     <boot dev='hd'/>
	I0818 19:57:58.577494   67330 main.go:141] libmachine: (old-k8s-version-247539)     <bootmenu enable='no'/>
	I0818 19:57:58.577505   67330 main.go:141] libmachine: (old-k8s-version-247539)   </os>
	I0818 19:57:58.577515   67330 main.go:141] libmachine: (old-k8s-version-247539)   <devices>
	I0818 19:57:58.577524   67330 main.go:141] libmachine: (old-k8s-version-247539)     <disk type='file' device='cdrom'>
	I0818 19:57:58.577542   67330 main.go:141] libmachine: (old-k8s-version-247539)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/boot2docker.iso'/>
	I0818 19:57:58.577554   67330 main.go:141] libmachine: (old-k8s-version-247539)       <target dev='hdc' bus='scsi'/>
	I0818 19:57:58.577567   67330 main.go:141] libmachine: (old-k8s-version-247539)       <readonly/>
	I0818 19:57:58.577577   67330 main.go:141] libmachine: (old-k8s-version-247539)     </disk>
	I0818 19:57:58.577587   67330 main.go:141] libmachine: (old-k8s-version-247539)     <disk type='file' device='disk'>
	I0818 19:57:58.577600   67330 main.go:141] libmachine: (old-k8s-version-247539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 19:57:58.577623   67330 main.go:141] libmachine: (old-k8s-version-247539)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/old-k8s-version-247539.rawdisk'/>
	I0818 19:57:58.577637   67330 main.go:141] libmachine: (old-k8s-version-247539)       <target dev='hda' bus='virtio'/>
	I0818 19:57:58.577646   67330 main.go:141] libmachine: (old-k8s-version-247539)     </disk>
	I0818 19:57:58.577659   67330 main.go:141] libmachine: (old-k8s-version-247539)     <interface type='network'>
	I0818 19:57:58.577672   67330 main.go:141] libmachine: (old-k8s-version-247539)       <source network='mk-old-k8s-version-247539'/>
	I0818 19:57:58.577684   67330 main.go:141] libmachine: (old-k8s-version-247539)       <model type='virtio'/>
	I0818 19:57:58.577694   67330 main.go:141] libmachine: (old-k8s-version-247539)     </interface>
	I0818 19:57:58.577705   67330 main.go:141] libmachine: (old-k8s-version-247539)     <interface type='network'>
	I0818 19:57:58.577717   67330 main.go:141] libmachine: (old-k8s-version-247539)       <source network='default'/>
	I0818 19:57:58.577736   67330 main.go:141] libmachine: (old-k8s-version-247539)       <model type='virtio'/>
	I0818 19:57:58.577749   67330 main.go:141] libmachine: (old-k8s-version-247539)     </interface>
	I0818 19:57:58.577757   67330 main.go:141] libmachine: (old-k8s-version-247539)     <serial type='pty'>
	I0818 19:57:58.577768   67330 main.go:141] libmachine: (old-k8s-version-247539)       <target port='0'/>
	I0818 19:57:58.577780   67330 main.go:141] libmachine: (old-k8s-version-247539)     </serial>
	I0818 19:57:58.577792   67330 main.go:141] libmachine: (old-k8s-version-247539)     <console type='pty'>
	I0818 19:57:58.577802   67330 main.go:141] libmachine: (old-k8s-version-247539)       <target type='serial' port='0'/>
	I0818 19:57:58.577813   67330 main.go:141] libmachine: (old-k8s-version-247539)     </console>
	I0818 19:57:58.577824   67330 main.go:141] libmachine: (old-k8s-version-247539)     <rng model='virtio'>
	I0818 19:57:58.577836   67330 main.go:141] libmachine: (old-k8s-version-247539)       <backend model='random'>/dev/random</backend>
	I0818 19:57:58.577847   67330 main.go:141] libmachine: (old-k8s-version-247539)     </rng>
	I0818 19:57:58.577855   67330 main.go:141] libmachine: (old-k8s-version-247539)     
	I0818 19:57:58.577865   67330 main.go:141] libmachine: (old-k8s-version-247539)     
	I0818 19:57:58.577874   67330 main.go:141] libmachine: (old-k8s-version-247539)   </devices>
	I0818 19:57:58.577884   67330 main.go:141] libmachine: (old-k8s-version-247539) </domain>
	I0818 19:57:58.577894   67330 main.go:141] libmachine: (old-k8s-version-247539) 
	I0818 19:57:58.583173   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:64:f0:ab in network default
	I0818 19:57:58.583914   67330 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 19:57:58.583937   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:57:58.584805   67330 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 19:57:58.585149   67330 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 19:57:58.585834   67330 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 19:57:58.586674   67330 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 19:58:00.450709   67330 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 19:58:00.451936   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:00.452392   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:00.452434   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:00.452377   67359 retry.go:31] will retry after 241.241265ms: waiting for machine to come up
	I0818 19:58:00.695927   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:00.696655   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:00.696676   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:00.696568   67359 retry.go:31] will retry after 375.625845ms: waiting for machine to come up
	I0818 19:58:01.074552   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:01.075182   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:01.075329   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:01.075291   67359 retry.go:31] will retry after 377.725453ms: waiting for machine to come up
	I0818 19:58:01.454965   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:01.455653   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:01.455677   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:01.455600   67359 retry.go:31] will retry after 490.039131ms: waiting for machine to come up
	I0818 19:58:01.946926   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:01.947504   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:01.947537   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:01.947457   67359 retry.go:31] will retry after 578.750617ms: waiting for machine to come up
	I0818 19:58:02.527972   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:02.528785   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:02.528807   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:02.528707   67359 retry.go:31] will retry after 627.941976ms: waiting for machine to come up
	I0818 19:58:03.407867   65481 kubeadm.go:310] [api-check] The API server is healthy after 6.003262087s
	I0818 19:58:03.424809   65481 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 19:58:03.447480   65481 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 19:58:03.487525   65481 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 19:58:03.487799   65481 kubeadm.go:310] [mark-control-plane] Marking the node bridge-754609 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 19:58:03.501710   65481 kubeadm.go:310] [bootstrap-token] Using token: bvxk5t.g1urbx3qw65yqf98
	I0818 19:57:59.543815   65906 cri.go:89] found id: "ca1f4cb356a75f7bfde64dda7d24f4601f9d3fcfa3df6f7354b1d3eafe0903e5"
	I0818 19:57:59.543837   65906 cri.go:89] found id: "9d98585db0b559b31b71839d5b6220043b63d3e05d830a8de78de939751c4775"
	I0818 19:57:59.543843   65906 cri.go:89] found id: "7230fb59b2f2d3852cb767e2626df8e01449a153e33ca8132e9c19db05d29985"
	I0818 19:57:59.543847   65906 cri.go:89] found id: "ee405099c7f45a4febe5468543891f0637ed9ce408811dbb7036d0a22a67ea16"
	I0818 19:57:59.543851   65906 cri.go:89] found id: "708d2205368e9873e2c2d4f9437678689221733bd3e648833d1f2ffdf21b7cb6"
	I0818 19:57:59.543856   65906 cri.go:89] found id: "e52aca55f497ca47ee83db599517c16dce5647eb5b59562db55051751635fd5c"
	I0818 19:57:59.543860   65906 cri.go:89] found id: "04eaab77f54c75793e37a693347b4c3141dc2fb55593f3749294694704d3430a"
	I0818 19:57:59.543864   65906 cri.go:89] found id: "e3c3304ca749a18baa3ba8b41c4090e9088c62ca754cb8fe87d22a6da654f1c9"
	I0818 19:57:59.543868   65906 cri.go:89] found id: "4ebb989b48a02729cf66f6d3c8c0e6a4e3b96580e5198bfe32a2fe5dc65973b6"
	I0818 19:57:59.543877   65906 cri.go:89] found id: "23cab6a4f8d418203c2d5ccebc3b72c6bbdcbe91e92177d9d9a3d6b676e07a10"
	I0818 19:57:59.543881   65906 cri.go:89] found id: "c7e54d6bf71d4026f91bf14d5a4ff83f6fdc64467f5aeccaf9689abb017f3e65"
	I0818 19:57:59.543885   65906 cri.go:89] found id: ""
	I0818 19:57:59.543928   65906 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 18 19:58:19 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:19.968794667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9ed2dc5-78e8-47d8-a29c-990157ee299b name=/runtime.v1.RuntimeService/Version
	Aug 18 19:58:19 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:19.970171494Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25582716-1cdf-4342-94c9-41606d55f1a6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:58:19 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:19.970634154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724011099970608928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25582716-1cdf-4342-94c9-41606d55f1a6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:58:19 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:19.971141558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fdc54f7-ea6d-4ba1-bd2b-293d43e344b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:19 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:19.971214691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fdc54f7-ea6d-4ba1-bd2b-293d43e344b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:19 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:19.971668254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba5da75e7d251bd0f6281022a659fa5f172b10ca5c182e5ffd561068a2db2351,PodSandboxId:d06663dee020456eb78783d9235f08441e3b2e45e474814838fc7c38ccd82359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011096240583322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd4ced-6e0f-4c2e-8a5b-dc990926969e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f19050b6335506f67fb9f2e7b4d5521161a70cc19622d4515a5802dc3e4a68,PodSandboxId:9f53c2cc1b12cbc0b5b38fabf2f0b0d24692b31cf1470a7d06e96a5a3e1daf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011092451872867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e30ea4a577e60521bb7591862280ef6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769b762b934da3a73a2814aa125818da33e2517ba393199f09cda35642fc8363,PodSandboxId:66de8910a49f16f2b7a4524105c1492a5b13d834fbaf99bfe8ff5bb8cb9957f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011092444390175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9079ea0e8a2dda67117cb2eef386dc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df770d81bd12c09b509043db2f3d4e607215f740095206d0408def63854b9e51,PodSandboxId:68e7e813f2ab0ec112b7452d8399388edcdc78b6c4cc49b6884ffe50e15d0fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011092431280785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61bd513cc048fe2d9297187b10bf6c77,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61bdd79f3331f4f87959cddac19950bb00a89b268c0bc6f280f8f2767021d7d1,PodSandboxId:de29e07ed11ceeee06f79908451f4b9cc7ac2bce2b846bf17a72746525ab5f15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011086581110658,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpfhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00f8d498-50ea-4756-b2dd-4d6d37f933e1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20a6e151b761b88cd390536ba553b7e227893a39d7454f247416a765bb7eef3,PodSandboxId:69b80fa879408df2897583820e1d482c14bba891e180457bde2e4b2925f2762e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011082024222570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e93bfc2a82a74d0664eb6a289bb6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d9d9958c1ba66a8429d962fda0ffb383a2d93f5bf0e44b055e48c2ec270dac,PodSandboxId:14151fa7ea7f4fc351f932c0d15c4091236dcb2357f74c15e003610200307490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011082011733081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z98bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985e7451-0b20-4749-8f2b-dce25096ff4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"
protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a920e23d5c7519308c04c484392b5dfddb8b2939a3c1ae772ac00aa6a153155,PodSandboxId:99e4e513728f7377b2003fabf7a5f04e195ad8cff2ab6ce7e47f45e43e92ef0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011081504150978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh4g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,},Annotations:
map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d98585db0b559b31b71839d5b6220043b63d3e05d830a8de78de939751c4775,PodSandboxId:9f53c2cc1b12cbc0b5b38fabf2f0b0d24692b31cf1470a7d06e96a5a3e1daf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724011079036547544,Labels:map[string]string{io.kubernet
es.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e30ea4a577e60521bb7591862280ef6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1f4cb356a75f7bfde64dda7d24f4601f9d3fcfa3df6f7354b1d3eafe0903e5,PodSandboxId:66de8910a49f16f2b7a4524105c1492a5b13d834fbaf99bfe8ff5bb8cb9957f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011079062875748,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9079ea0e8a2dda67117cb2eef386dc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7230fb59b2f2d3852cb767e2626df8e01449a153e33ca8132e9c19db05d29985,PodSandboxId:68e7e813f2ab0ec112b7452d8399388edcdc78b6c4cc49b6884ffe50e15d0fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724011078992390924,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61bd513cc048fe2d9297187b10bf6c77,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee405099c7f45a4febe5468543891f0637ed9ce408811dbb7036d0a22a67ea16,PodSandboxId:709f4394efdd40bd915d5405cee349e590f92929e1160d82cea1e71a3b9e9693,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724011061593145997,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z98bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985e7451-0b20-4749-8f2b-dce25096ff4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708d2205368e9873e2c2d4f9437678689221733bd3e648833d1f2ffdf21b7cb6,PodSandboxId:78b02458b65382ab4dabde03603899141946ad673b541e1447a77f19641f80c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724011061553543895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh4g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04eaab77f54c75793e37a693347b4c3141dc2fb55593f3749294694704d3430a,PodSandboxId:7f9e35b5e123ec870da34ef9907fe54916e792a14b764affa021c2c38bf9e8c6,Meta
data:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724011061045844404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpfhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00f8d498-50ea-4756-b2dd-4d6d37f933e1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c3304ca749a18baa3ba8b41c4090e9088c62ca754cb8fe87d22a6da654f1c9,PodSandboxId:b79c61e5cca4f7ff17bbe0f52adc189d5df755f927c6dc5bc1fca31d60bebf74,Metadata:&ContainerMetadata{Name:kube-s
cheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724011048764362607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e93bfc2a82a74d0664eb6a289bb6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fdc54f7-ea6d-4ba1-bd2b-293d43e344b7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.011512377Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5cbd59d-918c-4662-bdcd-20a14a6aca20 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.011875413Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d06663dee020456eb78783d9235f08441e3b2e45e474814838fc7c38ccd82359,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:eccd4ced-6e0f-4c2e-8a5b-dc990926969e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724011088479673112,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd4ced-6e0f-4c2e-8a5b-dc990926969e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\
":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-18T19:57:40.571766574Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de29e07ed11ceeee06f79908451f4b9cc7ac2bce2b846bf17a72746525ab5f15,Metadata:&PodSandboxMetadata{Name:kube-proxy-zpfhq,Uid:00f8d498-50ea-4756-b2dd-4d6d37f933e1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724011086478372115,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zpfhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00f8d498-50ea-4756-b
2dd-4d6d37f933e1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:57:40.535182679Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69b80fa879408df2897583820e1d482c14bba891e180457bde2e4b2925f2762e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-179876,Uid:8e93bfc2a82a74d0664eb6a289bb6cd8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724011081285985408,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e93bfc2a82a74d0664eb6a289bb6cd8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8e93bfc2a82a74d0664eb6a289bb6cd8,kubernetes.io/config.seen: 2024-08-18T19:57:27.416553381Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:14151fa7ea7f4fc351f932c0d15c4091236dcb2357f74c15e003610
200307490,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-z98bd,Uid:985e7451-0b20-4749-8f2b-dce25096ff4e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724011080971438876,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-z98bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985e7451-0b20-4749-8f2b-dce25096ff4e,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:57:40.647518846Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:99e4e513728f7377b2003fabf7a5f04e195ad8cff2ab6ce7e47f45e43e92ef0f,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-zh4g2,Uid:5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724011080947976393,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-zh4g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:57:40.626931150Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:68e7e813f2ab0ec112b7452d8399388edcdc78b6c4cc49b6884ffe50e15d0fc4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-179876,Uid:61bd513cc048fe2d9297187b10bf6c77,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724011078743038755,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61bd513cc048fe2d9297187b10bf6c77,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 61bd513cc048fe2d9297187b10bf6c77,kubernetes.io/config.seen: 2024-08-18T19:57:27.416552371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6
6de8910a49f16f2b7a4524105c1492a5b13d834fbaf99bfe8ff5bb8cb9957f5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-179876,Uid:ee9079ea0e8a2dda67117cb2eef386dc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724011078730827424,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9079ea0e8a2dda67117cb2eef386dc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.147:8443,kubernetes.io/config.hash: ee9079ea0e8a2dda67117cb2eef386dc,kubernetes.io/config.seen: 2024-08-18T19:57:27.416548342Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9f53c2cc1b12cbc0b5b38fabf2f0b0d24692b31cf1470a7d06e96a5a3e1daf20,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-179876,Uid:1e30ea4a577e60521bb7591862280ef6,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1724011078728939213,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e30ea4a577e60521bb7591862280ef6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.147:2379,kubernetes.io/config.hash: 1e30ea4a577e60521bb7591862280ef6,kubernetes.io/config.seen: 2024-08-18T19:57:27.477706821Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c5cbd59d-918c-4662-bdcd-20a14a6aca20 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.012892942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b195f83f-7d62-438b-9177-45652afab89e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.012974870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b195f83f-7d62-438b-9177-45652afab89e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.013248846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:88f19050b6335506f67fb9f2e7b4d5521161a70cc19622d4515a5802dc3e4a68,PodSandboxId:9f53c2cc1b12cbc0b5b38fabf2f0b0d24692b31cf1470a7d06e96a5a3e1daf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011092451872867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e30ea4a577e60521bb7591862280ef6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769b762b934da3a73a2814aa125818da33e2517ba393199f09cda35642fc8363,PodSandboxId:66de8910a49f16f2b7a4524105c1492a5b13d834fbaf99bfe8ff5bb8cb9957f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011092444390175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9079ea0e8a2dda67117cb2eef386dc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df770d81bd12c09b509043db2f3d4e607215f740095206d0408def63854b9e51,PodSandboxId:68e7e813f2ab0ec112b7452d8399388edcdc78b6c4cc49b6884ffe50e15d0fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011092431280785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61bd513cc048fe2d9297187b10bf6c77,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61bdd79f3331f4f87959cddac19950bb00a89b268c0bc6f280f8f2767021d7d1,PodSandboxId:de29e07ed11ceeee06f79908451f4b9cc7ac2bce2b846bf17a72746525ab5f15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011086581110658,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpfhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00f8d498-50ea-4756-b2dd-4d6d37f933e1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /
dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20a6e151b761b88cd390536ba553b7e227893a39d7454f247416a765bb7eef3,PodSandboxId:69b80fa879408df2897583820e1d482c14bba891e180457bde2e4b2925f2762e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011082024222570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e93bfc2a82a74d0664eb6a289bb6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d9d9958c1ba66a8429d962fda0ffb383a2d93f5bf0e44b055e48c2ec270dac,PodSandboxId:14151fa7ea7f4fc351f932c0d15c4091236dcb2357f74c15e003610200307490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011082011733081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z98bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985e7451-0b20-4749-8f2b-dce25096ff4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a920e23d5c7519308c04c484392b5dfddb8b2939a3c1ae772ac00aa6a153155,PodSandboxId:99e4e513728f7377b2003fabf7a5f04e195ad8cff2ab6ce7e47f45e43e92ef0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011081504150978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh4g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,}
,Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b195f83f-7d62-438b-9177-45652afab89e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.027204380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f60f03e-0566-47ad-9c9e-b80b6d3d209a name=/runtime.v1.RuntimeService/Version
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.027309891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f60f03e-0566-47ad-9c9e-b80b6d3d209a name=/runtime.v1.RuntimeService/Version
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.028824424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c0d65d7-d8e7-44a8-9aab-176319389b9c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.029666874Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724011100029629504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c0d65d7-d8e7-44a8-9aab-176319389b9c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.030265957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdb1c03e-23f2-41a4-b175-dcf255262989 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.030356973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdb1c03e-23f2-41a4-b175-dcf255262989 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.030989830Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba5da75e7d251bd0f6281022a659fa5f172b10ca5c182e5ffd561068a2db2351,PodSandboxId:d06663dee020456eb78783d9235f08441e3b2e45e474814838fc7c38ccd82359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011096240583322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd4ced-6e0f-4c2e-8a5b-dc990926969e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f19050b6335506f67fb9f2e7b4d5521161a70cc19622d4515a5802dc3e4a68,PodSandboxId:9f53c2cc1b12cbc0b5b38fabf2f0b0d24692b31cf1470a7d06e96a5a3e1daf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011092451872867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e30ea4a577e60521bb7591862280ef6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769b762b934da3a73a2814aa125818da33e2517ba393199f09cda35642fc8363,PodSandboxId:66de8910a49f16f2b7a4524105c1492a5b13d834fbaf99bfe8ff5bb8cb9957f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011092444390175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9079ea0e8a2dda67117cb2eef386dc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df770d81bd12c09b509043db2f3d4e607215f740095206d0408def63854b9e51,PodSandboxId:68e7e813f2ab0ec112b7452d8399388edcdc78b6c4cc49b6884ffe50e15d0fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011092431280785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61bd513cc048fe2d9297187b10bf6c77,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61bdd79f3331f4f87959cddac19950bb00a89b268c0bc6f280f8f2767021d7d1,PodSandboxId:de29e07ed11ceeee06f79908451f4b9cc7ac2bce2b846bf17a72746525ab5f15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011086581110658,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpfhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00f8d498-50ea-4756-b2dd-4d6d37f933e1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20a6e151b761b88cd390536ba553b7e227893a39d7454f247416a765bb7eef3,PodSandboxId:69b80fa879408df2897583820e1d482c14bba891e180457bde2e4b2925f2762e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011082024222570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e93bfc2a82a74d0664eb6a289bb6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d9d9958c1ba66a8429d962fda0ffb383a2d93f5bf0e44b055e48c2ec270dac,PodSandboxId:14151fa7ea7f4fc351f932c0d15c4091236dcb2357f74c15e003610200307490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011082011733081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z98bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985e7451-0b20-4749-8f2b-dce25096ff4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"
protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a920e23d5c7519308c04c484392b5dfddb8b2939a3c1ae772ac00aa6a153155,PodSandboxId:99e4e513728f7377b2003fabf7a5f04e195ad8cff2ab6ce7e47f45e43e92ef0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011081504150978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh4g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,},Annotations:
map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d98585db0b559b31b71839d5b6220043b63d3e05d830a8de78de939751c4775,PodSandboxId:9f53c2cc1b12cbc0b5b38fabf2f0b0d24692b31cf1470a7d06e96a5a3e1daf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724011079036547544,Labels:map[string]string{io.kubernet
es.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e30ea4a577e60521bb7591862280ef6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1f4cb356a75f7bfde64dda7d24f4601f9d3fcfa3df6f7354b1d3eafe0903e5,PodSandboxId:66de8910a49f16f2b7a4524105c1492a5b13d834fbaf99bfe8ff5bb8cb9957f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011079062875748,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9079ea0e8a2dda67117cb2eef386dc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7230fb59b2f2d3852cb767e2626df8e01449a153e33ca8132e9c19db05d29985,PodSandboxId:68e7e813f2ab0ec112b7452d8399388edcdc78b6c4cc49b6884ffe50e15d0fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724011078992390924,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61bd513cc048fe2d9297187b10bf6c77,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee405099c7f45a4febe5468543891f0637ed9ce408811dbb7036d0a22a67ea16,PodSandboxId:709f4394efdd40bd915d5405cee349e590f92929e1160d82cea1e71a3b9e9693,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724011061593145997,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z98bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985e7451-0b20-4749-8f2b-dce25096ff4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708d2205368e9873e2c2d4f9437678689221733bd3e648833d1f2ffdf21b7cb6,PodSandboxId:78b02458b65382ab4dabde03603899141946ad673b541e1447a77f19641f80c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724011061553543895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh4g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04eaab77f54c75793e37a693347b4c3141dc2fb55593f3749294694704d3430a,PodSandboxId:7f9e35b5e123ec870da34ef9907fe54916e792a14b764affa021c2c38bf9e8c6,Meta
data:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724011061045844404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpfhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00f8d498-50ea-4756-b2dd-4d6d37f933e1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c3304ca749a18baa3ba8b41c4090e9088c62ca754cb8fe87d22a6da654f1c9,PodSandboxId:b79c61e5cca4f7ff17bbe0f52adc189d5df755f927c6dc5bc1fca31d60bebf74,Metadata:&ContainerMetadata{Name:kube-s
cheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724011048764362607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e93bfc2a82a74d0664eb6a289bb6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdb1c03e-23f2-41a4-b175-dcf255262989 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.073717181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6f86c3f-2c7b-4614-abcc-1fac6b44644a name=/runtime.v1.RuntimeService/Version
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.073849816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6f86c3f-2c7b-4614-abcc-1fac6b44644a name=/runtime.v1.RuntimeService/Version
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.075096759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c5ece6a-a326-41e4-a4a0-e00f7384228d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.075878327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724011100075816398,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c5ece6a-a326-41e4-a4a0-e00f7384228d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.076881620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=020b3683-df95-41e7-be2f-ffd95b56d92c name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.076984908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=020b3683-df95-41e7-be2f-ffd95b56d92c name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:58:20 kubernetes-upgrade-179876 crio[2347]: time="2024-08-18 19:58:20.077934372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ba5da75e7d251bd0f6281022a659fa5f172b10ca5c182e5ffd561068a2db2351,PodSandboxId:d06663dee020456eb78783d9235f08441e3b2e45e474814838fc7c38ccd82359,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011096240583322,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eccd4ced-6e0f-4c2e-8a5b-dc990926969e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f19050b6335506f67fb9f2e7b4d5521161a70cc19622d4515a5802dc3e4a68,PodSandboxId:9f53c2cc1b12cbc0b5b38fabf2f0b0d24692b31cf1470a7d06e96a5a3e1daf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011092451872867,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e30ea4a577e60521bb7591862280ef6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:769b762b934da3a73a2814aa125818da33e2517ba393199f09cda35642fc8363,PodSandboxId:66de8910a49f16f2b7a4524105c1492a5b13d834fbaf99bfe8ff5bb8cb9957f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011092444390175,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9079ea0e8a2dda67117cb2eef386dc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df770d81bd12c09b509043db2f3d4e607215f740095206d0408def63854b9e51,PodSandboxId:68e7e813f2ab0ec112b7452d8399388edcdc78b6c4cc49b6884ffe50e15d0fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011092431280785,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61bd513cc048fe2d9297187b10bf6c77,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61bdd79f3331f4f87959cddac19950bb00a89b268c0bc6f280f8f2767021d7d1,PodSandboxId:de29e07ed11ceeee06f79908451f4b9cc7ac2bce2b846bf17a72746525ab5f15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011086581110658,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpfhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00f8d498-50ea-4756-b2dd-4d6d37f933e1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e20a6e151b761b88cd390536ba553b7e227893a39d7454f247416a765bb7eef3,PodSandboxId:69b80fa879408df2897583820e1d482c14bba891e180457bde2e4b2925f2762e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011082024222570,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e93bfc2a82a74d0664eb6a289bb6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d9d9958c1ba66a8429d962fda0ffb383a2d93f5bf0e44b055e48c2ec270dac,PodSandboxId:14151fa7ea7f4fc351f932c0d15c4091236dcb2357f74c15e003610200307490,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011082011733081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z98bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985e7451-0b20-4749-8f2b-dce25096ff4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"
protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a920e23d5c7519308c04c484392b5dfddb8b2939a3c1ae772ac00aa6a153155,PodSandboxId:99e4e513728f7377b2003fabf7a5f04e195ad8cff2ab6ce7e47f45e43e92ef0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011081504150978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh4g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,},Annotations:
map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d98585db0b559b31b71839d5b6220043b63d3e05d830a8de78de939751c4775,PodSandboxId:9f53c2cc1b12cbc0b5b38fabf2f0b0d24692b31cf1470a7d06e96a5a3e1daf20,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724011079036547544,Labels:map[string]string{io.kubernet
es.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e30ea4a577e60521bb7591862280ef6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1f4cb356a75f7bfde64dda7d24f4601f9d3fcfa3df6f7354b1d3eafe0903e5,PodSandboxId:66de8910a49f16f2b7a4524105c1492a5b13d834fbaf99bfe8ff5bb8cb9957f5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011079062875748,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9079ea0e8a2dda67117cb2eef386dc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7230fb59b2f2d3852cb767e2626df8e01449a153e33ca8132e9c19db05d29985,PodSandboxId:68e7e813f2ab0ec112b7452d8399388edcdc78b6c4cc49b6884ffe50e15d0fc4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724011078992390924,Labels:map[string]string{io.kubernetes.container.name: kube
-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61bd513cc048fe2d9297187b10bf6c77,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee405099c7f45a4febe5468543891f0637ed9ce408811dbb7036d0a22a67ea16,PodSandboxId:709f4394efdd40bd915d5405cee349e590f92929e1160d82cea1e71a3b9e9693,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724011061593145997,Labels:map[string]string{io.kubernetes.container.name:
coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-z98bd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985e7451-0b20-4749-8f2b-dce25096ff4e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:708d2205368e9873e2c2d4f9437678689221733bd3e648833d1f2ffdf21b7cb6,PodSandboxId:78b02458b65382ab4dabde03603899141946ad673b541e1447a77f19641f80c0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724011061553543895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-zh4g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b2c1c63-c4cc-4f28-a7c0-578a6afd4a1f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04eaab77f54c75793e37a693347b4c3141dc2fb55593f3749294694704d3430a,PodSandboxId:7f9e35b5e123ec870da34ef9907fe54916e792a14b764affa021c2c38bf9e8c6,Meta
data:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724011061045844404,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zpfhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 00f8d498-50ea-4756-b2dd-4d6d37f933e1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c3304ca749a18baa3ba8b41c4090e9088c62ca754cb8fe87d22a6da654f1c9,PodSandboxId:b79c61e5cca4f7ff17bbe0f52adc189d5df755f927c6dc5bc1fca31d60bebf74,Metadata:&ContainerMetadata{Name:kube-s
cheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724011048764362607,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-179876,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e93bfc2a82a74d0664eb6a289bb6cd8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=020b3683-df95-41e7-be2f-ffd95b56d92c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ba5da75e7d251       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Exited              storage-provisioner       2                   d06663dee0204       storage-provisioner
	88f19050b6335       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   9f53c2cc1b12c       etcd-kubernetes-upgrade-179876
	769b762b934da       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   7 seconds ago       Running             kube-apiserver            2                   66de8910a49f1       kube-apiserver-kubernetes-upgrade-179876
	df770d81bd12c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   7 seconds ago       Running             kube-controller-manager   2                   68e7e813f2ab0       kube-controller-manager-kubernetes-upgrade-179876
	61bdd79f3331f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   13 seconds ago      Running             kube-proxy                1                   de29e07ed11ce       kube-proxy-zpfhq
	e20a6e151b761       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   18 seconds ago      Running             kube-scheduler            1                   69b80fa879408       kube-scheduler-kubernetes-upgrade-179876
	e4d9d9958c1ba       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   1                   14151fa7ea7f4       coredns-6f6b679f8f-z98bd
	4a920e23d5c75       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago      Running             coredns                   1                   99e4e513728f7       coredns-6f6b679f8f-zh4g2
	ca1f4cb356a75       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 seconds ago      Exited              kube-apiserver            1                   66de8910a49f1       kube-apiserver-kubernetes-upgrade-179876
	9d98585db0b55       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   21 seconds ago      Exited              etcd                      1                   9f53c2cc1b12c       etcd-kubernetes-upgrade-179876
	7230fb59b2f2d       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   21 seconds ago      Exited              kube-controller-manager   1                   68e7e813f2ab0       kube-controller-manager-kubernetes-upgrade-179876
	ee405099c7f45       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago      Exited              coredns                   0                   709f4394efdd4       coredns-6f6b679f8f-z98bd
	708d2205368e9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago      Exited              coredns                   0                   78b02458b6538       coredns-6f6b679f8f-zh4g2
	04eaab77f54c7       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   39 seconds ago      Exited              kube-proxy                0                   7f9e35b5e123e       kube-proxy-zpfhq
	e3c3304ca749a       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   51 seconds ago      Exited              kube-scheduler            0                   b79c61e5cca4f       kube-scheduler-kubernetes-upgrade-179876
	
	
	==> coredns [4a920e23d5c7519308c04c484392b5dfddb8b2939a3c1ae772ac00aa6a153155] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [708d2205368e9873e2c2d4f9437678689221733bd3e648833d1f2ffdf21b7cb6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e4d9d9958c1ba66a8429d962fda0ffb383a2d93f5bf0e44b055e48c2ec270dac] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ee405099c7f45a4febe5468543891f0637ed9ce408811dbb7036d0a22a67ea16] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-179876
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-179876
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:57:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-179876
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:58:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:58:15 +0000   Sun, 18 Aug 2024 19:57:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:58:15 +0000   Sun, 18 Aug 2024 19:57:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:58:15 +0000   Sun, 18 Aug 2024 19:57:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:58:15 +0000   Sun, 18 Aug 2024 19:57:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.147
	  Hostname:    kubernetes-upgrade-179876
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 29fa974d502143aeb97c648e2604959a
	  System UUID:                29fa974d-5021-43ae-b97c-648e2604959a
	  Boot ID:                    796637f4-00ae-4222-a40b-9997aa4febff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-z98bd                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     40s
	  kube-system                 coredns-6f6b679f8f-zh4g2                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     40s
	  kube-system                 etcd-kubernetes-upgrade-179876                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         43s
	  kube-system                 kube-apiserver-kubernetes-upgrade-179876             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-179876    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-zpfhq                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-scheduler-kubernetes-upgrade-179876             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 38s                kube-proxy       
	  Normal  NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)  kubelet          Node kubernetes-upgrade-179876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x7 over 53s)  kubelet          Node kubernetes-upgrade-179876 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)  kubelet          Node kubernetes-upgrade-179876 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           41s                node-controller  Node kubernetes-upgrade-179876 event: Registered Node kubernetes-upgrade-179876 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-179876 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-179876 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-179876 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-179876 event: Registered Node kubernetes-upgrade-179876 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.127846] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.067060] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060226] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.235699] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.141072] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.335538] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +4.583387] systemd-fstab-generator[731]: Ignoring "noauto" option for root device
	[  +0.065110] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.171745] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[ +11.403208] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
	[  +0.075871] kauditd_printk_skb: 97 callbacks suppressed
	[ +15.270470] systemd-fstab-generator[2184]: Ignoring "noauto" option for root device
	[  +0.088275] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.064298] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[  +0.208726] systemd-fstab-generator[2210]: Ignoring "noauto" option for root device
	[  +0.154688] systemd-fstab-generator[2222]: Ignoring "noauto" option for root device
	[  +0.411265] systemd-fstab-generator[2320]: Ignoring "noauto" option for root device
	[  +3.429273] systemd-fstab-generator[2431]: Ignoring "noauto" option for root device
	[  +0.766350] kauditd_printk_skb: 149 callbacks suppressed
	[Aug18 19:58] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.077317] systemd-fstab-generator[3358]: Ignoring "noauto" option for root device
	[  +0.137871] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.448258] kauditd_printk_skb: 43 callbacks suppressed
	[  +0.463804] systemd-fstab-generator[3738]: Ignoring "noauto" option for root device
	
	
	==> etcd [88f19050b6335506f67fb9f2e7b4d5521161a70cc19622d4515a5802dc3e4a68] <==
	{"level":"info","ts":"2024-08-18T19:58:12.746984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f switched to configuration voters=(6856775551110209807)"}
	{"level":"info","ts":"2024-08-18T19:58:12.747043Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f9c16d6162bb414b","local-member-id":"5f282908f481e10f","added-peer-id":"5f282908f481e10f","added-peer-peer-urls":["https://192.168.61.147:2380"]}
	{"level":"info","ts":"2024-08-18T19:58:12.747147Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f9c16d6162bb414b","local-member-id":"5f282908f481e10f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:58:12.747193Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T19:58:12.759770Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-18T19:58:12.760010Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"5f282908f481e10f","initial-advertise-peer-urls":["https://192.168.61.147:2380"],"listen-peer-urls":["https://192.168.61.147:2380"],"advertise-client-urls":["https://192.168.61.147:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.147:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T19:58:12.761498Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.147:2380"}
	{"level":"info","ts":"2024-08-18T19:58:12.762557Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.147:2380"}
	{"level":"info","ts":"2024-08-18T19:58:12.762238Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T19:58:14.021802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T19:58:14.021865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:58:14.021887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f received MsgPreVoteResp from 5f282908f481e10f at term 3"}
	{"level":"info","ts":"2024-08-18T19:58:14.021919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f became candidate at term 4"}
	{"level":"info","ts":"2024-08-18T19:58:14.021932Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f received MsgVoteResp from 5f282908f481e10f at term 4"}
	{"level":"info","ts":"2024-08-18T19:58:14.021953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f became leader at term 4"}
	{"level":"info","ts":"2024-08-18T19:58:14.021963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5f282908f481e10f elected leader 5f282908f481e10f at term 4"}
	{"level":"info","ts":"2024-08-18T19:58:14.028686Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5f282908f481e10f","local-member-attributes":"{Name:kubernetes-upgrade-179876 ClientURLs:[https://192.168.61.147:2379]}","request-path":"/0/members/5f282908f481e10f/attributes","cluster-id":"f9c16d6162bb414b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T19:58:14.028732Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:58:14.029121Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T19:58:14.029178Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T19:58:14.029184Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:58:14.029987Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:58:14.030151Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:58:14.030913Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T19:58:14.031144Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.147:2379"}
	
	
	==> etcd [9d98585db0b559b31b71839d5b6220043b63d3e05d830a8de78de939751c4775] <==
	{"level":"info","ts":"2024-08-18T19:58:01.398654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-18T19:58:01.398677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f received MsgPreVoteResp from 5f282908f481e10f at term 2"}
	{"level":"info","ts":"2024-08-18T19:58:01.398693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f became candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:58:01.398702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f received MsgVoteResp from 5f282908f481e10f at term 3"}
	{"level":"info","ts":"2024-08-18T19:58:01.398782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5f282908f481e10f became leader at term 3"}
	{"level":"info","ts":"2024-08-18T19:58:01.398797Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5f282908f481e10f elected leader 5f282908f481e10f at term 3"}
	{"level":"info","ts":"2024-08-18T19:58:01.407603Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"5f282908f481e10f","local-member-attributes":"{Name:kubernetes-upgrade-179876 ClientURLs:[https://192.168.61.147:2379]}","request-path":"/0/members/5f282908f481e10f/attributes","cluster-id":"f9c16d6162bb414b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T19:58:01.407662Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:58:01.414323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:58:01.415514Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:58:01.427260Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.147:2379"}
	{"level":"info","ts":"2024-08-18T19:58:01.434715Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:58:01.436556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T19:58:01.436601Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T19:58:01.448955Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T19:58:09.980048Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-18T19:58:09.980149Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-179876","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.147:2380"],"advertise-client-urls":["https://192.168.61.147:2379"]}
	{"level":"warn","ts":"2024-08-18T19:58:09.980262Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:58:09.980293Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:58:09.981989Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.147:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:58:09.982023Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.147:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:58:09.982092Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5f282908f481e10f","current-leader-member-id":"5f282908f481e10f"}
	{"level":"info","ts":"2024-08-18T19:58:09.986570Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.61.147:2380"}
	{"level":"info","ts":"2024-08-18T19:58:09.986821Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.61.147:2380"}
	{"level":"info","ts":"2024-08-18T19:58:09.986901Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-179876","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.147:2380"],"advertise-client-urls":["https://192.168.61.147:2379"]}
	
	
	==> kernel <==
	 19:58:20 up 1 min,  0 users,  load average: 1.64, 0.43, 0.15
	Linux kubernetes-upgrade-179876 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [769b762b934da3a73a2814aa125818da33e2517ba393199f09cda35642fc8363] <==
	I0818 19:58:15.502735       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0818 19:58:15.502870       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:58:15.503341       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:58:15.503417       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:58:15.503683       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0818 19:58:15.503878       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:58:15.503941       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:58:15.503974       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:58:15.504052       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:58:15.524271       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 19:58:15.532575       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:58:15.532606       1 policy_source.go:224] refreshing policies
	I0818 19:58:15.574024       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 19:58:15.581025       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:58:15.581099       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:58:15.589637       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:58:16.383356       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0818 19:58:16.695510       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.147]
	I0818 19:58:16.696979       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 19:58:16.703160       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0818 19:58:17.216139       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0818 19:58:17.230103       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0818 19:58:17.390113       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0818 19:58:17.492047       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0818 19:58:17.499252       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [ca1f4cb356a75f7bfde64dda7d24f4601f9d3fcfa3df6f7354b1d3eafe0903e5] <==
	I0818 19:58:03.927906       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0818 19:58:03.928191       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0818 19:58:03.928529       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0818 19:58:03.928786       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 19:58:03.930611       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0818 19:58:03.930838       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0818 19:58:03.938565       1 remote_available_controller.go:419] Shutting down RemoteAvailability controller
	I0818 19:58:03.958031       1 controller.go:157] Shutting down quota evaluator
	I0818 19:58:03.958073       1 controller.go:176] quota evaluator worker shutdown
	I0818 19:58:03.958326       1 controller.go:176] quota evaluator worker shutdown
	I0818 19:58:03.958356       1 controller.go:176] quota evaluator worker shutdown
	I0818 19:58:03.958364       1 controller.go:176] quota evaluator worker shutdown
	I0818 19:58:03.958371       1 controller.go:176] quota evaluator worker shutdown
	W0818 19:58:04.645982       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0818 19:58:04.672642       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:05.646084       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0818 19:58:05.673341       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:06.646877       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0818 19:58:06.672375       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:07.645994       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0818 19:58:07.672765       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:08.645890       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0818 19:58:08.672434       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:09.646183       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0818 19:58:09.673422       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-controller-manager [7230fb59b2f2d3852cb767e2626df8e01449a153e33ca8132e9c19db05d29985] <==
	I0818 19:58:00.304303       1 serving.go:386] Generated self-signed cert in-memory
	I0818 19:58:01.479075       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0818 19:58:01.479127       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:58:01.488304       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0818 19:58:01.492136       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:58:01.492833       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 19:58:01.493388       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [df770d81bd12c09b509043db2f3d4e607215f740095206d0408def63854b9e51] <==
	I0818 19:58:18.824324       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0818 19:58:18.824632       1 shared_informer.go:320] Caches are synced for endpoint
	I0818 19:58:18.826301       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0818 19:58:18.833216       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0818 19:58:18.839733       1 shared_informer.go:320] Caches are synced for attach detach
	I0818 19:58:18.840135       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0818 19:58:18.840299       1 shared_informer.go:320] Caches are synced for crt configmap
	I0818 19:58:18.845143       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0818 19:58:18.853542       1 shared_informer.go:320] Caches are synced for HPA
	I0818 19:58:18.860579       1 shared_informer.go:320] Caches are synced for cronjob
	I0818 19:58:18.967601       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0818 19:58:18.986663       1 shared_informer.go:320] Caches are synced for daemon sets
	I0818 19:58:18.994008       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:58:19.011343       1 shared_informer.go:320] Caches are synced for taint
	I0818 19:58:19.012236       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0818 19:58:19.012353       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-179876"
	I0818 19:58:19.012413       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0818 19:58:19.041319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="224.608093ms"
	I0818 19:58:19.041693       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:58:19.042553       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="171.114µs"
	I0818 19:58:19.067012       1 shared_informer.go:320] Caches are synced for disruption
	I0818 19:58:19.067277       1 shared_informer.go:320] Caches are synced for deployment
	I0818 19:58:19.477001       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:58:19.480238       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:58:19.480320       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [04eaab77f54c75793e37a693347b4c3141dc2fb55593f3749294694704d3430a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:57:41.551641       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:57:41.565873       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.147"]
	E0818 19:57:41.565950       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:57:41.748712       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:57:41.749060       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:57:41.749526       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:57:41.752757       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:57:41.753073       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:57:41.753088       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:57:41.754997       1 config.go:197] "Starting service config controller"
	I0818 19:57:41.755030       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:57:41.755050       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:57:41.755054       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:57:41.755427       1 config.go:326] "Starting node config controller"
	I0818 19:57:41.755517       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:57:41.855626       1 shared_informer.go:320] Caches are synced for node config
	I0818 19:57:41.855672       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:57:41.855693       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [61bdd79f3331f4f87959cddac19950bb00a89b268c0bc6f280f8f2767021d7d1] <==
	E0818 19:58:06.763037       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:58:06.765750       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-179876\": dial tcp 192.168.61.147:8443: connect: connection refused"
	E0818 19:58:07.822646       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-179876\": dial tcp 192.168.61.147:8443: connect: connection refused"
	E0818 19:58:09.928943       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-179876\": dial tcp 192.168.61.147:8443: connect: connection refused"
	I0818 19:58:15.513670       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.147"]
	E0818 19:58:15.513750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:58:15.569090       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:58:15.569144       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:58:15.569176       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:58:15.571782       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:58:15.572067       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:58:15.572096       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:58:15.573607       1 config.go:197] "Starting service config controller"
	I0818 19:58:15.573649       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:58:15.573684       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:58:15.573689       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:58:15.575402       1 config.go:326] "Starting node config controller"
	I0818 19:58:15.575434       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:58:15.673817       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 19:58:15.673891       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:58:15.675603       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e20a6e151b761b88cd390536ba553b7e227893a39d7454f247416a765bb7eef3] <==
	E0818 19:58:10.937874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.61.147:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.61.147:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:11.390580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.147:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.147:8443: connect: connection refused
	E0818 19:58:11.390641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.61.147:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.61.147:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:11.461668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.61.147:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.147:8443: connect: connection refused
	E0818 19:58:11.461796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.61.147:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.61.147:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:11.852752       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.61.147:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.147:8443: connect: connection refused
	E0818 19:58:11.852836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.61.147:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.61.147:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:12.226782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.61.147:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.147:8443: connect: connection refused
	E0818 19:58:12.226842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.61.147:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.61.147:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:12.301175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.61.147:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.147:8443: connect: connection refused
	E0818 19:58:12.301246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.61.147:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.61.147:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:12.721378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.61.147:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.147:8443: connect: connection refused
	E0818 19:58:12.721660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.61.147:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.61.147:8443: connect: connection refused" logger="UnhandledError"
	W0818 19:58:15.448162       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 19:58:15.448319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:58:15.448536       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0818 19:58:15.448646       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:58:15.448966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:58:15.449083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:58:15.449296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 19:58:15.449398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 19:58:15.450763       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 19:58:15.450804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 19:58:15.453667       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 19:58:15.455430       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	
	
	==> kube-scheduler [e3c3304ca749a18baa3ba8b41c4090e9088c62ca754cb8fe87d22a6da654f1c9] <==
	E0818 19:57:32.921261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:32.926734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:57:32.926806       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:32.969398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 19:57:32.969755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.075925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 19:57:33.076054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.204541       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 19:57:33.204602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.218770       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 19:57:33.218832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.290565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 19:57:33.290717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.298955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:57:33.299075       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.367343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 19:57:33.367564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.383072       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 19:57:33.383180       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.440695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 19:57:33.441306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:57:33.465567       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 19:57:33.465674       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0818 19:57:36.059841       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 19:57:47.155143       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.159724    3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61bd513cc048fe2d9297187b10bf6c77-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-179876\" (UID: \"61bd513cc048fe2d9297187b10bf6c77\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-179876"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.159739    3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61bd513cc048fe2d9297187b10bf6c77-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-179876\" (UID: \"61bd513cc048fe2d9297187b10bf6c77\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-179876"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.159753    3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/1e30ea4a577e60521bb7591862280ef6-etcd-data\") pod \"etcd-kubernetes-upgrade-179876\" (UID: \"1e30ea4a577e60521bb7591862280ef6\") " pod="kube-system/etcd-kubernetes-upgrade-179876"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.159771    3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee9079ea0e8a2dda67117cb2eef386dc-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-179876\" (UID: \"ee9079ea0e8a2dda67117cb2eef386dc\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-179876"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.346868    3365 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-179876"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: E0818 19:58:12.347971    3365 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.147:8443: connect: connection refused" node="kubernetes-upgrade-179876"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.414318    3365 scope.go:117] "RemoveContainer" containerID="7230fb59b2f2d3852cb767e2626df8e01449a153e33ca8132e9c19db05d29985"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.418014    3365 scope.go:117] "RemoveContainer" containerID="9d98585db0b559b31b71839d5b6220043b63d3e05d830a8de78de939751c4775"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.419590    3365 scope.go:117] "RemoveContainer" containerID="ca1f4cb356a75f7bfde64dda7d24f4601f9d3fcfa3df6f7354b1d3eafe0903e5"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: E0818 19:58:12.550523    3365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-179876?timeout=10s\": dial tcp 192.168.61.147:8443: connect: connection refused" interval="800ms"
	Aug 18 19:58:12 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:12.749651    3365 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-179876"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.556713    3365 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-179876"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.556925    3365 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-179876"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.557028    3365 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.559383    3365 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: E0818 19:58:15.849153    3365 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-179876\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-179876"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.921058    3365 apiserver.go:52] "Watching apiserver"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.954010    3365 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.966438    3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00f8d498-50ea-4756-b2dd-4d6d37f933e1-xtables-lock\") pod \"kube-proxy-zpfhq\" (UID: \"00f8d498-50ea-4756-b2dd-4d6d37f933e1\") " pod="kube-system/kube-proxy-zpfhq"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.966571    3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00f8d498-50ea-4756-b2dd-4d6d37f933e1-lib-modules\") pod \"kube-proxy-zpfhq\" (UID: \"00f8d498-50ea-4756-b2dd-4d6d37f933e1\") " pod="kube-system/kube-proxy-zpfhq"
	Aug 18 19:58:15 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:15.966605    3365 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/eccd4ced-6e0f-4c2e-8a5b-dc990926969e-tmp\") pod \"storage-provisioner\" (UID: \"eccd4ced-6e0f-4c2e-8a5b-dc990926969e\") " pod="kube-system/storage-provisioner"
	Aug 18 19:58:16 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:16.226285    3365 scope.go:117] "RemoveContainer" containerID="a948bdf7d2e2475589b9b78c9e301f4de410be19fd357c22ee79a812d54b447d"
	Aug 18 19:58:17 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:17.135590    3365 scope.go:117] "RemoveContainer" containerID="a948bdf7d2e2475589b9b78c9e301f4de410be19fd357c22ee79a812d54b447d"
	Aug 18 19:58:17 kubernetes-upgrade-179876 kubelet[3365]: I0818 19:58:17.135986    3365 scope.go:117] "RemoveContainer" containerID="ba5da75e7d251bd0f6281022a659fa5f172b10ca5c182e5ffd561068a2db2351"
	Aug 18 19:58:17 kubernetes-upgrade-179876 kubelet[3365]: E0818 19:58:17.136132    3365 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(eccd4ced-6e0f-4c2e-8a5b-dc990926969e)\"" pod="kube-system/storage-provisioner" podUID="eccd4ced-6e0f-4c2e-8a5b-dc990926969e"
	
	
	==> storage-provisioner [ba5da75e7d251bd0f6281022a659fa5f172b10ca5c182e5ffd561068a2db2351] <==
	I0818 19:58:16.329695       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0818 19:58:16.331395       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:58:19.476229   68114 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19423-7747/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-179876 -n kubernetes-upgrade-179876
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-179876 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-179876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-179876
--- FAIL: TestKubernetesUpgrade (405.61s)
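
Note on the stderr captured above: "bufio.Scanner: token too long" is the standard failure mode of Go's bufio.Scanner when a single line in the scanned file exceeds the scanner's maximum token size (64 KiB by default, bufio.MaxScanTokenSize); lastStart.txt evidently contains such a line. The following is a minimal, hypothetical sketch of that behaviour and the usual workaround (Scanner.Buffer with a larger limit) — it is not minikube's actual logs.go code, and the readLines helper and file path are illustrative only.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLines scans a log file line by line. With the default Scanner buffer
	// (bufio.MaxScanTokenSize = 64 KiB), any single line longer than that makes
	// Scan() return false and Err() report bufio.ErrTooLong ("token too long").
	func readLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Workaround: allow larger tokens, e.g. up to 1 MiB per line.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		// Hypothetical path, mirroring the file named in the error above.
		lines, err := readLines("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, "failed to read file:", err)
			return
		}
		fmt.Println("read", len(lines), "lines")
	}

Without the Buffer call, scanning stops at the first oversized line and Err() returns bufio.ErrTooLong, which is the error surfaced in the post-mortem stderr; note that this particular message is incidental to the TestKubernetesUpgrade failure itself.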

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (87.89s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-147100 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-147100 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.00580346s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-147100] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-147100" primary control-plane node in "pause-147100" cluster
	* Updating the running kvm2 "pause-147100" VM ...
	* Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-147100" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:51:59.422448   56105 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:51:59.422711   56105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:51:59.422721   56105 out.go:358] Setting ErrFile to fd 2...
	I0818 19:51:59.422725   56105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:51:59.422901   56105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:51:59.423444   56105 out.go:352] Setting JSON to false
	I0818 19:51:59.424316   56105 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5663,"bootTime":1724005056,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:51:59.424371   56105 start.go:139] virtualization: kvm guest
	I0818 19:51:59.426513   56105 out.go:177] * [pause-147100] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:51:59.427914   56105 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:51:59.427932   56105 notify.go:220] Checking for updates...
	I0818 19:51:59.430320   56105 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:51:59.431579   56105 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:51:59.432790   56105 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:51:59.434031   56105 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:51:59.435356   56105 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:51:59.437186   56105 config.go:182] Loaded profile config "pause-147100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:51:59.437794   56105 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:51:59.437851   56105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:51:59.453189   56105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46831
	I0818 19:51:59.453598   56105 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:51:59.454084   56105 main.go:141] libmachine: Using API Version  1
	I0818 19:51:59.454127   56105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:51:59.454461   56105 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:51:59.454645   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:51:59.455079   56105 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:51:59.455350   56105 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:51:59.455405   56105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:51:59.469834   56105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0818 19:51:59.470256   56105 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:51:59.470678   56105 main.go:141] libmachine: Using API Version  1
	I0818 19:51:59.470701   56105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:51:59.471065   56105 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:51:59.471269   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:51:59.504752   56105 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 19:51:59.506130   56105 start.go:297] selected driver: kvm2
	I0818 19:51:59.506152   56105 start.go:901] validating driver "kvm2" against &{Name:pause-147100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-147100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:51:59.506283   56105 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:51:59.506606   56105 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:51:59.506710   56105 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:51:59.521951   56105 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:51:59.522887   56105 cni.go:84] Creating CNI manager for ""
	I0818 19:51:59.522907   56105 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:51:59.522979   56105 start.go:340] cluster config:
	{Name:pause-147100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-147100 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:51:59.523164   56105 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:51:59.524961   56105 out.go:177] * Starting "pause-147100" primary control-plane node in "pause-147100" cluster
	I0818 19:51:59.526452   56105 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:51:59.526496   56105 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 19:51:59.526505   56105 cache.go:56] Caching tarball of preloaded images
	I0818 19:51:59.526597   56105 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:51:59.526614   56105 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 19:51:59.526759   56105 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100/config.json ...
	I0818 19:51:59.527001   56105 start.go:360] acquireMachinesLock for pause-147100: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:52:25.348181   56105 start.go:364] duration metric: took 25.821136674s to acquireMachinesLock for "pause-147100"
	I0818 19:52:25.348266   56105 start.go:96] Skipping create...Using existing machine configuration
	I0818 19:52:25.348275   56105 fix.go:54] fixHost starting: 
	I0818 19:52:25.348718   56105 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:52:25.348796   56105 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:52:25.365858   56105 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41763
	I0818 19:52:25.366281   56105 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:52:25.366809   56105 main.go:141] libmachine: Using API Version  1
	I0818 19:52:25.366829   56105 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:52:25.367196   56105 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:52:25.367431   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:52:25.367622   56105 main.go:141] libmachine: (pause-147100) Calling .GetState
	I0818 19:52:25.369270   56105 fix.go:112] recreateIfNeeded on pause-147100: state=Running err=<nil>
	W0818 19:52:25.369298   56105 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 19:52:25.371450   56105 out.go:177] * Updating the running kvm2 "pause-147100" VM ...
	I0818 19:52:25.372786   56105 machine.go:93] provisionDockerMachine start ...
	I0818 19:52:25.372806   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:52:25.373051   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:25.375538   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.375958   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:25.375984   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.376169   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:25.376350   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:25.376499   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:25.376617   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:25.376771   56105 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:25.377015   56105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.46 22 <nil> <nil>}
	I0818 19:52:25.377033   56105 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 19:52:25.483840   56105 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-147100
	
	I0818 19:52:25.483870   56105 main.go:141] libmachine: (pause-147100) Calling .GetMachineName
	I0818 19:52:25.484165   56105 buildroot.go:166] provisioning hostname "pause-147100"
	I0818 19:52:25.484209   56105 main.go:141] libmachine: (pause-147100) Calling .GetMachineName
	I0818 19:52:25.484424   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:25.486952   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.487304   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:25.487331   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.487518   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:25.487704   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:25.487879   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:25.488032   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:25.488231   56105 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:25.488440   56105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.46 22 <nil> <nil>}
	I0818 19:52:25.488453   56105 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-147100 && echo "pause-147100" | sudo tee /etc/hostname
	I0818 19:52:25.611158   56105 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-147100
	
	I0818 19:52:25.611181   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:25.613818   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.614177   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:25.614198   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.614389   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:25.614577   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:25.614743   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:25.614890   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:25.615079   56105 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:25.615273   56105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.46 22 <nil> <nil>}
	I0818 19:52:25.615289   56105 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-147100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-147100/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-147100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:52:25.724916   56105 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:52:25.724949   56105 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 19:52:25.724985   56105 buildroot.go:174] setting up certificates
	I0818 19:52:25.724998   56105 provision.go:84] configureAuth start
	I0818 19:52:25.725010   56105 main.go:141] libmachine: (pause-147100) Calling .GetMachineName
	I0818 19:52:25.725363   56105 main.go:141] libmachine: (pause-147100) Calling .GetIP
	I0818 19:52:25.728722   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.729134   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:25.729163   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.729329   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:25.731945   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.732328   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:25.732355   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.732514   56105 provision.go:143] copyHostCerts
	I0818 19:52:25.732564   56105 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 19:52:25.732579   56105 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:52:25.732628   56105 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 19:52:25.732785   56105 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 19:52:25.732803   56105 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:52:25.732833   56105 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 19:52:25.732917   56105 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 19:52:25.732927   56105 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:52:25.732954   56105 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 19:52:25.733041   56105 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.pause-147100 san=[127.0.0.1 192.168.50.46 localhost minikube pause-147100]
	I0818 19:52:25.912291   56105 provision.go:177] copyRemoteCerts
	I0818 19:52:25.912350   56105 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:52:25.912373   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:25.915203   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.915544   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:25.915592   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:25.915800   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:25.915988   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:25.916161   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:25.916308   56105 sshutil.go:53] new ssh client: &{IP:192.168.50.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/pause-147100/id_rsa Username:docker}
	I0818 19:52:25.998604   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:52:26.030343   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 19:52:26.060848   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 19:52:26.088144   56105 provision.go:87] duration metric: took 363.123433ms to configureAuth
	I0818 19:52:26.088181   56105 buildroot.go:189] setting minikube options for container-runtime
	I0818 19:52:26.088410   56105 config.go:182] Loaded profile config "pause-147100": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:52:26.088527   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:26.091350   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:26.091702   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:26.091735   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:26.091892   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:26.092055   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:26.092189   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:26.092353   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:26.092490   56105 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:26.092655   56105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.46 22 <nil> <nil>}
	I0818 19:52:26.092668   56105 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 19:52:31.677788   56105 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 19:52:31.677820   56105 machine.go:96] duration metric: took 6.305019551s to provisionDockerMachine
	I0818 19:52:31.677834   56105 start.go:293] postStartSetup for "pause-147100" (driver="kvm2")
	I0818 19:52:31.677846   56105 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:52:31.677864   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:52:31.678229   56105 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:52:31.678262   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:31.681548   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.681977   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:31.682008   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.682195   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:31.682392   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:31.682582   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:31.682712   56105 sshutil.go:53] new ssh client: &{IP:192.168.50.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/pause-147100/id_rsa Username:docker}
	I0818 19:52:31.767219   56105 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:52:31.772104   56105 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 19:52:31.772132   56105 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 19:52:31.772202   56105 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 19:52:31.772320   56105 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 19:52:31.772456   56105 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:52:31.782517   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:52:31.814019   56105 start.go:296] duration metric: took 136.16998ms for postStartSetup
	I0818 19:52:31.814069   56105 fix.go:56] duration metric: took 6.465792979s for fixHost
	I0818 19:52:31.814092   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:31.817105   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.817454   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:31.817478   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.817640   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:31.817861   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:31.818057   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:31.818211   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:31.818388   56105 main.go:141] libmachine: Using SSH client type: native
	I0818 19:52:31.818609   56105 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.46 22 <nil> <nil>}
	I0818 19:52:31.818625   56105 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 19:52:31.929111   56105 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724010751.917902026
	
	I0818 19:52:31.929142   56105 fix.go:216] guest clock: 1724010751.917902026
	I0818 19:52:31.929152   56105 fix.go:229] Guest: 2024-08-18 19:52:31.917902026 +0000 UTC Remote: 2024-08-18 19:52:31.81407331 +0000 UTC m=+32.427007185 (delta=103.828716ms)
	I0818 19:52:31.929193   56105 fix.go:200] guest clock delta is within tolerance: 103.828716ms
	I0818 19:52:31.929201   56105 start.go:83] releasing machines lock for "pause-147100", held for 6.580972019s
	I0818 19:52:31.929235   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:52:31.929528   56105 main.go:141] libmachine: (pause-147100) Calling .GetIP
	I0818 19:52:31.932804   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.933249   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:31.933297   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.933525   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:52:31.934050   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:52:31.934257   56105 main.go:141] libmachine: (pause-147100) Calling .DriverName
	I0818 19:52:31.934369   56105 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:52:31.934426   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:31.934478   56105 ssh_runner.go:195] Run: cat /version.json
	I0818 19:52:31.934524   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHHostname
	I0818 19:52:31.937426   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.937454   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.937794   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:31.937813   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.937837   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:31.937854   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:31.938015   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:31.938127   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHPort
	I0818 19:52:31.938183   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:31.938416   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:31.938422   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHKeyPath
	I0818 19:52:31.938604   56105 main.go:141] libmachine: (pause-147100) Calling .GetSSHUsername
	I0818 19:52:31.938618   56105 sshutil.go:53] new ssh client: &{IP:192.168.50.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/pause-147100/id_rsa Username:docker}
	I0818 19:52:31.938728   56105 sshutil.go:53] new ssh client: &{IP:192.168.50.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/pause-147100/id_rsa Username:docker}
	I0818 19:52:32.021639   56105 ssh_runner.go:195] Run: systemctl --version
	I0818 19:52:32.044149   56105 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 19:52:32.216840   56105 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 19:52:32.224886   56105 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 19:52:32.224948   56105 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:52:32.235920   56105 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0818 19:52:32.235943   56105 start.go:495] detecting cgroup driver to use...
	I0818 19:52:32.236003   56105 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 19:52:32.258497   56105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 19:52:32.273583   56105 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:52:32.273644   56105 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:52:32.290572   56105 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:52:32.306056   56105 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:52:32.467340   56105 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:52:32.627943   56105 docker.go:233] disabling docker service ...
	I0818 19:52:32.628015   56105 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:52:32.649262   56105 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:52:32.669581   56105 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 19:52:32.815905   56105 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 19:52:32.948564   56105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:52:32.964215   56105 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:52:32.984114   56105 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 19:52:32.984198   56105 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:32.995572   56105 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 19:52:32.995633   56105 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:33.006157   56105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:33.016509   56105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:33.028047   56105 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:52:33.038755   56105 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:33.052549   56105 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:33.066583   56105 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:52:33.076874   56105 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:52:33.086571   56105 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:52:33.095820   56105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:52:33.231783   56105 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 19:52:35.469145   56105 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.23732974s)
	I0818 19:52:35.469176   56105 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 19:52:35.469228   56105 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 19:52:35.475081   56105 start.go:563] Will wait 60s for crictl version
	I0818 19:52:35.475136   56105 ssh_runner.go:195] Run: which crictl
	I0818 19:52:35.479287   56105 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:52:35.528020   56105 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 19:52:35.528103   56105 ssh_runner.go:195] Run: crio --version
	I0818 19:52:35.565717   56105 ssh_runner.go:195] Run: crio --version
	I0818 19:52:35.603261   56105 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 19:52:35.604758   56105 main.go:141] libmachine: (pause-147100) Calling .GetIP
	I0818 19:52:35.608117   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:35.608607   56105 main.go:141] libmachine: (pause-147100) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:a2:d8", ip: ""} in network mk-pause-147100: {Iface:virbr4 ExpiryTime:2024-08-18 20:50:59 +0000 UTC Type:0 Mac:52:54:00:a8:a2:d8 Iaid: IPaddr:192.168.50.46 Prefix:24 Hostname:pause-147100 Clientid:01:52:54:00:a8:a2:d8}
	I0818 19:52:35.608631   56105 main.go:141] libmachine: (pause-147100) DBG | domain pause-147100 has defined IP address 192.168.50.46 and MAC address 52:54:00:a8:a2:d8 in network mk-pause-147100
	I0818 19:52:35.608889   56105 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 19:52:35.614691   56105 kubeadm.go:883] updating cluster {Name:pause-147100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0
ClusterName:pause-147100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:52:35.614849   56105 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 19:52:35.614907   56105 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:52:35.671029   56105 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:52:35.671054   56105 crio.go:433] Images already preloaded, skipping extraction
	I0818 19:52:35.671111   56105 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:52:35.800491   56105 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 19:52:35.800521   56105 cache_images.go:84] Images are preloaded, skipping loading
	I0818 19:52:35.800531   56105 kubeadm.go:934] updating node { 192.168.50.46 8443 v1.31.0 crio true true} ...
	I0818 19:52:35.800676   56105 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-147100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:pause-147100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 19:52:35.800818   56105 ssh_runner.go:195] Run: crio config
	I0818 19:52:36.005285   56105 cni.go:84] Creating CNI manager for ""
	I0818 19:52:36.005313   56105 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:52:36.005325   56105 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:52:36.005354   56105 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.46 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-147100 NodeName:pause-147100 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 19:52:36.005582   56105 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-147100"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.46
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.46"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 19:52:36.005663   56105 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 19:52:36.094866   56105 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:52:36.094947   56105 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 19:52:36.174668   56105 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0818 19:52:36.249687   56105 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:52:36.349568   56105 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0818 19:52:36.483954   56105 ssh_runner.go:195] Run: grep 192.168.50.46	control-plane.minikube.internal$ /etc/hosts
	I0818 19:52:36.494320   56105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:52:36.752047   56105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:52:36.775725   56105 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100 for IP: 192.168.50.46
	I0818 19:52:36.775794   56105 certs.go:194] generating shared ca certs ...
	I0818 19:52:36.775828   56105 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:52:36.776037   56105 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 19:52:36.776129   56105 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 19:52:36.776155   56105 certs.go:256] generating profile certs ...
	I0818 19:52:36.776296   56105 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100/client.key
	I0818 19:52:36.776424   56105 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100/apiserver.key.55c36723
	I0818 19:52:36.776498   56105 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100/proxy-client.key
	I0818 19:52:36.776662   56105 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 19:52:36.776720   56105 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 19:52:36.776742   56105 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:52:36.776785   56105 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:52:36.776829   56105 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:52:36.776867   56105 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 19:52:36.776932   56105 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:52:36.777772   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:52:36.818755   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:52:36.880457   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:52:36.929652   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 19:52:36.968702   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0818 19:52:36.998211   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 19:52:37.031701   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:52:37.066303   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/pause-147100/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 19:52:37.100603   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 19:52:37.126560   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 19:52:37.157861   56105 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 19:52:37.228638   56105 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:52:37.260744   56105 ssh_runner.go:195] Run: openssl version
	I0818 19:52:37.273208   56105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 19:52:37.295658   56105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 19:52:37.302934   56105 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:52:37.303000   56105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 19:52:37.309903   56105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 19:52:37.323700   56105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 19:52:37.343422   56105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 19:52:37.354224   56105 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:52:37.354295   56105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 19:52:37.361890   56105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 19:52:37.375597   56105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:52:37.404064   56105 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:52:37.413462   56105 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:52:37.413526   56105 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:52:37.421933   56105 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 19:52:37.437011   56105 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:52:37.443112   56105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 19:52:37.452237   56105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 19:52:37.458985   56105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 19:52:37.466932   56105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 19:52:37.475747   56105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 19:52:37.482858   56105 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 19:52:37.490596   56105 kubeadm.go:392] StartCluster: {Name:pause-147100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:pause-147100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.46 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:52:37.490784   56105 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 19:52:37.490835   56105 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:52:37.557160   56105 cri.go:89] found id: "10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c"
	I0818 19:52:37.557184   56105 cri.go:89] found id: "f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f"
	I0818 19:52:37.557189   56105 cri.go:89] found id: "eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765"
	I0818 19:52:37.557194   56105 cri.go:89] found id: "5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c"
	I0818 19:52:37.557198   56105 cri.go:89] found id: "b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee"
	I0818 19:52:37.557203   56105 cri.go:89] found id: "84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72"
	I0818 19:52:37.557206   56105 cri.go:89] found id: "141342fd884e31cc09542447675b36664b5e0a08d2ef2057cdfe6ef92b720522"
	I0818 19:52:37.557213   56105 cri.go:89] found id: "103636fb59676d235aab335b4269d55805908888e65937767d53331e647b4ecb"
	I0818 19:52:37.557216   56105 cri.go:89] found id: "664a384ca1027f45b80ba3fc7c1321adc8f00b1018df361215c87f40edd74dd5"
	I0818 19:52:37.557225   56105 cri.go:89] found id: "28e29b9199ad625fbf8fd11bb921650afe0b660858b27fa92f159262c7ea02d3"
	I0818 19:52:37.557229   56105 cri.go:89] found id: "f80de4bab9ac3e31419774553a76ee60471aa3c59fbbc0a97a979ab04e671665"
	I0818 19:52:37.557233   56105 cri.go:89] found id: "1771a11ce8673cfed81f3be68243c1686aa240386cb0778037d7a7fcea6e67c5"
	I0818 19:52:37.557237   56105 cri.go:89] found id: ""
	I0818 19:52:37.557289   56105 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-147100 -n pause-147100
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-147100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-147100 logs -n 25: (1.396878197s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-735899             | cert-expiration-735899    | jenkins | v1.33.1 | 18 Aug 24 19:49 UTC | 18 Aug 24 19:50 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-433596          | force-systemd-flag-433596 | jenkins | v1.33.1 | 18 Aug 24 19:49 UTC | 18 Aug 24 19:50 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:49 UTC | 18 Aug 24 19:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-319765             | running-upgrade-319765    | jenkins | v1.33.1 | 18 Aug 24 19:49 UTC | 18 Aug 24 19:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:50 UTC |
	| start   | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-433596 ssh cat     | force-systemd-flag-433596 | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:50 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-433596          | force-systemd-flag-433596 | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:50 UTC |
	| start   | -p pause-147100 --memory=2048         | pause-147100              | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:51 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-288448 sudo           | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:51 UTC |
	| start   | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:51 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-319765             | running-upgrade-319765    | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:51 UTC |
	| start   | -p cert-options-272048                | cert-options-272048       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:52 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-288448 sudo           | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:51 UTC |
	| start   | -p kubernetes-upgrade-179876          | kubernetes-upgrade-179876 | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-147100                       | pause-147100              | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:53 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-272048 ssh               | cert-options-272048       | jenkins | v1.33.1 | 18 Aug 24 19:52 UTC | 18 Aug 24 19:52 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-272048 -- sudo        | cert-options-272048       | jenkins | v1.33.1 | 18 Aug 24 19:52 UTC | 18 Aug 24 19:52 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-272048                | cert-options-272048       | jenkins | v1.33.1 | 18 Aug 24 19:52 UTC | 18 Aug 24 19:52 UTC |
	| start   | -p stopped-upgrade-729585             | minikube                  | jenkins | v1.26.0 | 18 Aug 24 19:52 UTC | 18 Aug 24 19:53 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-735899             | cert-expiration-735899    | jenkins | v1.33.1 | 18 Aug 24 19:53 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-729585 stop           | minikube                  | jenkins | v1.26.0 | 18 Aug 24 19:53 UTC | 18 Aug 24 19:53 UTC |
	| start   | -p stopped-upgrade-729585             | stopped-upgrade-729585    | jenkins | v1.33.1 | 18 Aug 24 19:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 19:53:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 19:53:23.352595   57130 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:53:23.352725   57130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:53:23.352733   57130 out.go:358] Setting ErrFile to fd 2...
	I0818 19:53:23.352737   57130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:53:23.352943   57130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:53:23.353426   57130 out.go:352] Setting JSON to false
	I0818 19:53:23.354383   57130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5747,"bootTime":1724005056,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:53:23.354457   57130 start.go:139] virtualization: kvm guest
	I0818 19:53:23.356711   57130 out.go:177] * [stopped-upgrade-729585] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:53:23.358425   57130 notify.go:220] Checking for updates...
	I0818 19:53:23.358433   57130 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:53:23.360045   57130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:53:23.361460   57130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:53:23.362627   57130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:53:23.363838   57130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:53:23.364967   57130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:53:20.156793   56105 addons.go:510] duration metric: took 2.565819ms for enable addons: enabled=[]
	I0818 19:53:20.156825   56105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:53:20.322236   56105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:53:20.338197   56105 node_ready.go:35] waiting up to 6m0s for node "pause-147100" to be "Ready" ...
	I0818 19:53:20.341313   56105 node_ready.go:49] node "pause-147100" has status "Ready":"True"
	I0818 19:53:20.341336   56105 node_ready.go:38] duration metric: took 3.102633ms for node "pause-147100" to be "Ready" ...
	I0818 19:53:20.341346   56105 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 19:53:20.347299   56105 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4whpz" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:20.516097   56105 pod_ready.go:93] pod "coredns-6f6b679f8f-4whpz" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:20.516130   56105 pod_ready.go:82] duration metric: took 168.807538ms for pod "coredns-6f6b679f8f-4whpz" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:20.516143   56105 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:20.915861   56105 pod_ready.go:93] pod "etcd-pause-147100" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:20.915893   56105 pod_ready.go:82] duration metric: took 399.741616ms for pod "etcd-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:20.915906   56105 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:21.315962   56105 pod_ready.go:93] pod "kube-apiserver-pause-147100" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:21.315986   56105 pod_ready.go:82] duration metric: took 400.072219ms for pod "kube-apiserver-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:21.315997   56105 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:21.715514   56105 pod_ready.go:93] pod "kube-controller-manager-pause-147100" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:21.715540   56105 pod_ready.go:82] duration metric: took 399.535794ms for pod "kube-controller-manager-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:21.715554   56105 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rnm6w" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:22.115778   56105 pod_ready.go:93] pod "kube-proxy-rnm6w" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:22.115804   56105 pod_ready.go:82] duration metric: took 400.24263ms for pod "kube-proxy-rnm6w" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:22.115815   56105 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:22.515960   56105 pod_ready.go:93] pod "kube-scheduler-pause-147100" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:22.515989   56105 pod_ready.go:82] duration metric: took 400.165664ms for pod "kube-scheduler-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:22.516002   56105 pod_ready.go:39] duration metric: took 2.174644643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 19:53:22.516018   56105 api_server.go:52] waiting for apiserver process to appear ...
	I0818 19:53:22.516072   56105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:53:22.531297   56105 api_server.go:72] duration metric: took 2.377359991s to wait for apiserver process to appear ...
	I0818 19:53:22.531325   56105 api_server.go:88] waiting for apiserver healthz status ...
	I0818 19:53:22.531340   56105 api_server.go:253] Checking apiserver healthz at https://192.168.50.46:8443/healthz ...
	I0818 19:53:22.538312   56105 api_server.go:279] https://192.168.50.46:8443/healthz returned 200:
	ok
	I0818 19:53:22.539472   56105 api_server.go:141] control plane version: v1.31.0
	I0818 19:53:22.539495   56105 api_server.go:131] duration metric: took 8.162382ms to wait for apiserver health ...
	I0818 19:53:22.539504   56105 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 19:53:22.717995   56105 system_pods.go:59] 6 kube-system pods found
	I0818 19:53:22.718023   56105 system_pods.go:61] "coredns-6f6b679f8f-4whpz" [702cffef-e838-46b6-b925-639696e50224] Running
	I0818 19:53:22.718028   56105 system_pods.go:61] "etcd-pause-147100" [23f2639b-cc4c-478b-ba2a-ccb2526a2311] Running
	I0818 19:53:22.718031   56105 system_pods.go:61] "kube-apiserver-pause-147100" [2ba8fe66-ebc7-4203-936f-d1602287c32f] Running
	I0818 19:53:22.718034   56105 system_pods.go:61] "kube-controller-manager-pause-147100" [3973f8c2-6ff5-4339-9ddd-8693926d148a] Running
	I0818 19:53:22.718038   56105 system_pods.go:61] "kube-proxy-rnm6w" [94667bc5-72ce-4d1b-b0a3-e4160989b677] Running
	I0818 19:53:22.718041   56105 system_pods.go:61] "kube-scheduler-pause-147100" [3bba113f-c0fb-4940-abd4-34c41dcc06b2] Running
	I0818 19:53:22.718048   56105 system_pods.go:74] duration metric: took 178.537628ms to wait for pod list to return data ...
	I0818 19:53:22.718056   56105 default_sa.go:34] waiting for default service account to be created ...
	I0818 19:53:22.915550   56105 default_sa.go:45] found service account: "default"
	I0818 19:53:22.915580   56105 default_sa.go:55] duration metric: took 197.517182ms for default service account to be created ...
	I0818 19:53:22.915593   56105 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 19:53:23.117036   56105 system_pods.go:86] 6 kube-system pods found
	I0818 19:53:23.117065   56105 system_pods.go:89] "coredns-6f6b679f8f-4whpz" [702cffef-e838-46b6-b925-639696e50224] Running
	I0818 19:53:23.117073   56105 system_pods.go:89] "etcd-pause-147100" [23f2639b-cc4c-478b-ba2a-ccb2526a2311] Running
	I0818 19:53:23.117078   56105 system_pods.go:89] "kube-apiserver-pause-147100" [2ba8fe66-ebc7-4203-936f-d1602287c32f] Running
	I0818 19:53:23.117084   56105 system_pods.go:89] "kube-controller-manager-pause-147100" [3973f8c2-6ff5-4339-9ddd-8693926d148a] Running
	I0818 19:53:23.117088   56105 system_pods.go:89] "kube-proxy-rnm6w" [94667bc5-72ce-4d1b-b0a3-e4160989b677] Running
	I0818 19:53:23.117093   56105 system_pods.go:89] "kube-scheduler-pause-147100" [3bba113f-c0fb-4940-abd4-34c41dcc06b2] Running
	I0818 19:53:23.117101   56105 system_pods.go:126] duration metric: took 201.502281ms to wait for k8s-apps to be running ...
	I0818 19:53:23.117110   56105 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 19:53:23.117162   56105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:53:23.131914   56105 system_svc.go:56] duration metric: took 14.798743ms WaitForService to wait for kubelet
	I0818 19:53:23.131942   56105 kubeadm.go:582] duration metric: took 2.978011129s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:53:23.131966   56105 node_conditions.go:102] verifying NodePressure condition ...
	I0818 19:53:23.316715   56105 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 19:53:23.316736   56105 node_conditions.go:123] node cpu capacity is 2
	I0818 19:53:23.316747   56105 node_conditions.go:105] duration metric: took 184.775579ms to run NodePressure ...
	I0818 19:53:23.316757   56105 start.go:241] waiting for startup goroutines ...
	I0818 19:53:23.316763   56105 start.go:246] waiting for cluster config update ...
	I0818 19:53:23.316770   56105 start.go:255] writing updated cluster config ...
	I0818 19:53:23.317013   56105 ssh_runner.go:195] Run: rm -f paused
	I0818 19:53:23.370739   56105 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 19:53:23.372398   56105 out.go:177] * Done! kubectl is now configured to use "pause-147100" cluster and "default" namespace by default
	I0818 19:53:23.366354   57130 config.go:182] Loaded profile config "stopped-upgrade-729585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0818 19:53:23.366736   57130 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:53:23.366773   57130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:53:23.384716   57130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0818 19:53:23.385225   57130 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:53:23.385736   57130 main.go:141] libmachine: Using API Version  1
	I0818 19:53:23.385757   57130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:53:23.386160   57130 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:53:23.386376   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .DriverName
	I0818 19:53:23.387887   57130 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0818 19:53:23.388982   57130 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:53:23.389404   57130 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:53:23.389448   57130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:53:23.406972   57130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36357
	I0818 19:53:23.407638   57130 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:53:23.408243   57130 main.go:141] libmachine: Using API Version  1
	I0818 19:53:23.408268   57130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:53:23.408644   57130 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:53:23.408836   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .DriverName
	I0818 19:53:23.447095   57130 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 19:53:23.448224   57130 start.go:297] selected driver: kvm2
	I0818 19:53:23.448243   57130 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-729585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-729
585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.168 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 19:53:23.448374   57130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:53:23.449337   57130 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:53:23.449406   57130 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:53:23.469629   57130 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:53:23.470072   57130 cni.go:84] Creating CNI manager for ""
	I0818 19:53:23.470096   57130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:53:23.470163   57130 start.go:340] cluster config:
	{Name:stopped-upgrade-729585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-729585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.168 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 19:53:23.470298   57130 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:53:23.473099   57130 out.go:177] * Starting "stopped-upgrade-729585" primary control-plane node in "stopped-upgrade-729585" cluster
	I0818 19:53:23.474263   57130 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0818 19:53:23.474307   57130 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0818 19:53:23.474316   57130 cache.go:56] Caching tarball of preloaded images
	I0818 19:53:23.474420   57130 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:53:23.474436   57130 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0818 19:53:23.474556   57130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/stopped-upgrade-729585/config.json ...
	I0818 19:53:23.474804   57130 start.go:360] acquireMachinesLock for stopped-upgrade-729585: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:53:23.474861   57130 start.go:364] duration metric: took 36.126µs to acquireMachinesLock for "stopped-upgrade-729585"
	I0818 19:53:23.474888   57130 start.go:96] Skipping create...Using existing machine configuration
	I0818 19:53:23.474897   57130 fix.go:54] fixHost starting: 
	I0818 19:53:23.475173   57130 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:53:23.475214   57130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:53:23.489801   57130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0818 19:53:23.490176   57130 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:53:23.490641   57130 main.go:141] libmachine: Using API Version  1
	I0818 19:53:23.490658   57130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:53:23.491005   57130 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:53:23.491200   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .DriverName
	I0818 19:53:23.491355   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .GetState
	I0818 19:53:23.492827   57130 fix.go:112] recreateIfNeeded on stopped-upgrade-729585: state=Stopped err=<nil>
	I0818 19:53:23.492863   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .DriverName
	W0818 19:53:23.493026   57130 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 19:53:23.494924   57130 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-729585" ...
	
	
	==> CRI-O <==
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.065576742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a780f2e-274b-42a4-b286-1f07027d8d93 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.066919796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da58a8e6-d30e-40fb-b793-8710626e20b3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.067503878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010804067472745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da58a8e6-d30e-40fb-b793-8710626e20b3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.068231293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01a58fe2-f2d6-4d38-9892-6c1e05b15682 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.068304724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01a58fe2-f2d6-4d38-9892-6c1e05b15682 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.068724337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724010784417835499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724010784407766032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724010780578025262,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c
6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724010780570715448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724010780560227336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4
bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724010780546885438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724010757146200285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724010756297121099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724010756225931513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause
-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724010756284405478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724010756133022541,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 27ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724010756028228221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: cafb8b2421c6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01a58fe2-f2d6-4d38-9892-6c1e05b15682 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.129107270Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2951659-6189-42ea-a4c2-80ff1ac9159a name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.129411657Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-4whpz,Uid:702cffef-e838-46b6-b925-639696e50224,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755970302335,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:51:28.374786206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&PodSandboxMetadata{Name:etcd-pause-147100,Uid:10b595936c2b444591befdf953e08d34,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1724010755914472341,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.46:2379,kubernetes.io/config.hash: 10b595936c2b444591befdf953e08d34,kubernetes.io/config.seen: 2024-08-18T19:51:23.331389763Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-147100,Uid:36455a54211e341dc9a4bd1cebfe81f4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755914127222,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 36455a54211e341dc9a4bd1cebfe81f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.46:8443,kubernetes.io/config.hash: 36455a54211e341dc9a4bd1cebfe81f4,kubernetes.io/config.seen: 2024-08-18T19:51:23.331395920Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&PodSandboxMetadata{Name:kube-proxy-rnm6w,Uid:94667bc5-72ce-4d1b-b0a3-e4160989b677,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755764638430,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:51:28.008777345Z,kubernetes.io/config.source: api,},RuntimeHan
dler:,},&PodSandbox{Id:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-147100,Uid:27ea1fe9c03fccf5ef1ae77bc408f564,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755754300399,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ea1fe9c03fccf5ef1ae77bc408f564,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 27ea1fe9c03fccf5ef1ae77bc408f564,kubernetes.io/config.seen: 2024-08-18T19:51:23.331397476Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-147100,Uid:cafb8b2421c6c891904bf1e9c4348c24,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755722302058,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c6c891904bf1e9c4348c24,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cafb8b2421c6c891904bf1e9c4348c24,kubernetes.io/config.seen: 2024-08-18T19:51:23.331399106Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b2951659-6189-42ea-a4c2-80ff1ac9159a name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.130019049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9053e88-1f7d-4ca0-9be3-e7f23dc2ebf3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.130158894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9053e88-1f7d-4ca0-9be3-e7f23dc2ebf3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.130380363Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724010784417835499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724010784407766032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724010780578025262,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c
6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724010780570715448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724010780560227336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4
bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724010780546885438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9053e88-1f7d-4ca0-9be3-e7f23dc2ebf3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.132468026Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f2185ba-5264-496d-adee-5467c1cb26e1 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.132529849Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f2185ba-5264-496d-adee-5467c1cb26e1 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.133798528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4c1f23c-e160-43b1-aade-60f5b718a24b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.134831696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010804134807731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4c1f23c-e160-43b1-aade-60f5b718a24b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.135862573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25712780-ef78-40f8-8b36-d7d5021517c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.135912014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25712780-ef78-40f8-8b36-d7d5021517c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.136133175Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724010784417835499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724010784407766032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724010780578025262,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c
6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724010780570715448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724010780560227336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4
bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724010780546885438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724010757146200285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724010756297121099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724010756225931513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause
-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724010756284405478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724010756133022541,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 27ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724010756028228221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: cafb8b2421c6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25712780-ef78-40f8-8b36-d7d5021517c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.185216780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=720dd846-c637-4c83-96c5-a39c8fc80863 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.185415454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=720dd846-c637-4c83-96c5-a39c8fc80863 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.187741891Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d790e14e-c4b9-493e-b7b2-7eb8a9794f1f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.188599485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010804188563271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d790e14e-c4b9-493e-b7b2-7eb8a9794f1f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.189507477Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7952f39d-ee92-40ed-8df2-0272cad6afe3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.189582999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7952f39d-ee92-40ed-8df2-0272cad6afe3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:24 pause-147100 crio[2322]: time="2024-08-18 19:53:24.189918476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724010784417835499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724010784407766032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724010780578025262,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c
6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724010780570715448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724010780560227336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4
bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724010780546885438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724010757146200285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724010756297121099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724010756225931513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause
-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724010756284405478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724010756133022541,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 27ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724010756028228221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: cafb8b2421c6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7952f39d-ee92-40ed-8df2-0272cad6afe3 name=/runtime.v1.RuntimeService/ListContainers
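The crio debug entries above are matched Request/Response pairs on the CRI RuntimeService: the kubelet and the log collector repeatedly call Version, ImageFsInfo, ListPodSandbox with a SANDBOX_READY filter, and ListContainers both unfiltered and with a CONTAINER_RUNNING filter. As a point of reference only, a minimal Go sketch of the same ListContainers call against this node's runtime (assuming the k8s.io/cri-api client stubs and the unix:///var/run/crio/crio.sock endpoint named in the kubeadm cri-socket annotation; this helper is not part of the test suite) could look like:

    // list_running.go: issue the same filtered ListContainers call seen in the crio debug log.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the CRI-O runtime endpoint over its unix socket (path is an assumption from the node annotation).
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatalf("dial crio: %v", err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Same filter as the logged request: only CONTAINER_RUNNING containers.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
    		Filter: &runtimeapi.ContainerFilter{
    			State: &runtimeapi.ContainerStateValue{
    				State: runtimeapi.ContainerState_CONTAINER_RUNNING,
    			},
    		},
    	})
    	if err != nil {
    		log.Fatalf("ListContainers: %v", err)
    	}
    	for _, c := range resp.Containers {
    		// Print a truncated ID, name and attempt, similar to the container status table below.
    		fmt.Printf("%s\t%s\tattempt=%d\n", c.Id[:13], c.Metadata.Name, c.Metadata.Attempt)
    	}
    }

Run against a live node this prints one line per running container, i.e. the Running rows of the container status table that follows.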
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef6fc3b6c7222       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   19 seconds ago      Running             kube-proxy                2                   a14ab7f71df89       kube-proxy-rnm6w
	ef36d36ec67dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago      Running             coredns                   2                   1890a54271e90       coredns-6f6b679f8f-4whpz
	93d4247e182f4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   23 seconds ago      Running             kube-scheduler            2                   0add04a2ce4e3       kube-scheduler-pause-147100
	08fa87a8eddc2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   23 seconds ago      Running             kube-controller-manager   2                   3dd52943504ff       kube-controller-manager-pause-147100
	bb12c02a28f97       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   23 seconds ago      Running             kube-apiserver            2                   dd73d48408569       kube-apiserver-pause-147100
	226f588a112ea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago      Running             etcd                      2                   facc15b44d0ce       etcd-pause-147100
	10b4e800c2a3d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   47 seconds ago      Exited              coredns                   1                   1890a54271e90       coredns-6f6b679f8f-4whpz
	f5626839dea18       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   47 seconds ago      Exited              kube-proxy                1                   a14ab7f71df89       kube-proxy-rnm6w
	eeaacf0c08f87       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   47 seconds ago      Exited              etcd                      1                   facc15b44d0ce       etcd-pause-147100
	5d7e54722e292       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   48 seconds ago      Exited              kube-apiserver            1                   dd73d48408569       kube-apiserver-pause-147100
	b9fdae3b3d9f7       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   48 seconds ago      Exited              kube-controller-manager   1                   3dd52943504ff       kube-controller-manager-pause-147100
	84ecf3a275a0e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   48 seconds ago      Exited              kube-scheduler            1                   0add04a2ce4e3       kube-scheduler-pause-147100
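The table above is the runtime's own view of the pods (the same listing crictl ps -a prints): attempt 2 of every control-plane container is Running and attempt 1 is Exited from the earlier restart. The CREATED column is derived from the nanosecond CreatedAt values in the ListContainers responses; a small sketch of that conversion, using a value copied from the log (the 19:53:24 snapshot time is an assumption taken from the surrounding entries, so the result is approximate):

    // created_ago.go: turn a CRI CreatedAt nanosecond timestamp into the "N seconds ago" form.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	created := time.Unix(0, 1724010784417835499)                    // kube-proxy, attempt 2 (from the log)
    	collected := time.Date(2024, 8, 18, 19, 53, 24, 0, time.UTC)    // assumed snapshot time of this dump
    	fmt.Printf("created %s, roughly %s before the snapshot\n",
    		created.UTC().Format(time.RFC3339), collected.Sub(created).Truncate(time.Second))
    }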
	
	
	==> coredns [10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54860 - 62269 "HINFO IN 538777352292154669.2928990509908764833. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013916586s
	
	
	==> coredns [ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59596 - 37857 "HINFO IN 6015369515724236624.7058985208755670938. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017176011s
	
	
	==> describe nodes <==
	Name:               pause-147100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-147100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=pause-147100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T19_51_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:51:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-147100
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:53:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:53:03 +0000   Sun, 18 Aug 2024 19:51:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:53:03 +0000   Sun, 18 Aug 2024 19:51:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:53:03 +0000   Sun, 18 Aug 2024 19:51:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:53:03 +0000   Sun, 18 Aug 2024 19:51:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.46
	  Hostname:    pause-147100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 066b47dfa6b44c869a948b1fd1b30c1d
	  System UUID:                066b47df-a6b4-4c86-9a94-8b1fd1b30c1d
	  Boot ID:                    c4c9de30-34c4-4cff-bc08-cafb0a689fcc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-4whpz                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     116s
	  kube-system                 etcd-pause-147100                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m1s
	  kube-system                 kube-apiserver-pause-147100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-pause-147100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-rnm6w                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-pause-147100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 115s               kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 44s                kube-proxy       
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node pause-147100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node pause-147100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node pause-147100 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m                 kubelet          Node pause-147100 status is now: NodeReady
	  Normal  RegisteredNode           117s               node-controller  Node pause-147100 event: Registered Node pause-147100 in Controller
	  Normal  RegisteredNode           41s                node-controller  Node pause-147100 event: Registered Node pause-147100 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-147100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-147100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-147100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-147100 event: Registered Node pause-147100 in Controller
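The block above is kubectl describe node output for pause-147100. The percentages under "Allocated resources" follow directly from the per-pod requests listed in the Non-terminated Pods table and the node's Allocatable figures (2 CPUs, 2015704Ki memory). A short sketch of the arithmetic, with the request totals copied from that table (kubectl truncates rather than rounds, hence 37% and 8%):

    // allocated.go: reproduce the "Allocated resources" percentages from the describe output above.
    package main

    import "fmt"

    func main() {
    	const allocatableMilliCPU = 2 * 1000                       // Allocatable cpu: 2
    	const allocatableMemKi = 2015704                           // Allocatable memory: 2015704Ki
    	const requestedMilliCPU = 100 + 100 + 250 + 200 + 100      // coredns+etcd+apiserver+controller-manager+scheduler
    	const requestedMemKi = (70 + 100) * 1024                   // coredns 70Mi + etcd 100Mi requests

    	// Integer division matches kubectl's truncation of the percentage.
    	fmt.Printf("cpu:    %dm (%d%%)\n", requestedMilliCPU, 100*requestedMilliCPU/allocatableMilliCPU)
    	fmt.Printf("memory: %dMi (%d%%)\n", requestedMemKi/1024, 100*requestedMemKi/allocatableMemKi)
    }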
	
	
	==> dmesg <==
	[  +6.227822] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.058549] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057086] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.194420] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.125209] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.293394] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.138546] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +4.293772] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.063508] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.014725] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.088663] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.920769] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.118737] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.684592] kauditd_printk_skb: 106 callbacks suppressed
	[Aug18 19:52] systemd-fstab-generator[2240]: Ignoring "noauto" option for root device
	[  +0.167965] systemd-fstab-generator[2252]: Ignoring "noauto" option for root device
	[  +0.209541] systemd-fstab-generator[2266]: Ignoring "noauto" option for root device
	[  +0.128645] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.281135] systemd-fstab-generator[2306]: Ignoring "noauto" option for root device
	[  +3.437990] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +3.993546] kauditd_printk_skb: 195 callbacks suppressed
	[ +19.254395] systemd-fstab-generator[3303]: Ignoring "noauto" option for root device
	[Aug18 19:53] kauditd_printk_skb: 41 callbacks suppressed
	[  +7.072491] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.659588] systemd-fstab-generator[3750]: Ignoring "noauto" option for root device
	
	
	==> etcd [226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8] <==
	{"level":"info","ts":"2024-08-18T19:53:06.964393Z","caller":"traceutil/trace.go:171","msg":"trace[1376264623] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"257.705429ms","start":"2024-08-18T19:53:06.706668Z","end":"2024-08-18T19:53:06.964374Z","steps":["trace[1376264623] 'process raft request'  (duration: 128.735493ms)","trace[1376264623] 'compare'  (duration: 128.478586ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:06.964574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.380027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" ","response":"range_response_count:1 size:694"}
	{"level":"info","ts":"2024-08-18T19:53:06.964707Z","caller":"traceutil/trace.go:171","msg":"trace[638292811] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:1; response_revision:487; }","duration":"257.520326ms","start":"2024-08-18T19:53:06.707176Z","end":"2024-08-18T19:53:06.964696Z","steps":["trace[638292811] 'agreement among raft nodes before linearized reading'  (duration: 257.311337ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:53:07.397566Z","caller":"traceutil/trace.go:171","msg":"trace[525141069] linearizableReadLoop","detail":"{readStateIndex:522; appliedIndex:521; }","duration":"425.8964ms","start":"2024-08-18T19:53:06.971651Z","end":"2024-08-18T19:53:07.397547Z","steps":["trace[525141069] 'read index received'  (duration: 341.415886ms)","trace[525141069] 'applied index is now lower than readState.Index'  (duration: 84.479495ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:53:07.397681Z","caller":"traceutil/trace.go:171","msg":"trace[1877656057] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"426.264706ms","start":"2024-08-18T19:53:06.971408Z","end":"2024-08-18T19:53:07.397672Z","steps":["trace[1877656057] 'process raft request'  (duration: 341.702874ms)","trace[1877656057] 'compare'  (duration: 84.141151ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:07.397758Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:53:06.971297Z","time spent":"426.407359ms","remote":"127.0.0.1:59246","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-147100.17eceaab9d4c48d0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-147100.17eceaab9d4c48d0\" value_size:592 lease:1281996915723541988 >> failure:<>"}
	{"level":"warn","ts":"2024-08-18T19:53:07.398041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.377104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:node-proxier\" ","response":"range_response_count:1 size:699"}
	{"level":"info","ts":"2024-08-18T19:53:07.398123Z","caller":"traceutil/trace.go:171","msg":"trace[1747235898] range","detail":"{range_begin:/registry/clusterrolebindings/system:node-proxier; range_end:; response_count:1; response_revision:488; }","duration":"426.440112ms","start":"2024-08-18T19:53:06.971647Z","end":"2024-08-18T19:53:07.398088Z","steps":["trace[1747235898] 'agreement among raft nodes before linearized reading'  (duration: 426.287675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:53:07.398156Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:53:06.971622Z","time spent":"426.526148ms","remote":"127.0.0.1:59490","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":722,"request content":"key:\"/registry/clusterrolebindings/system:node-proxier\" "}
	{"level":"warn","ts":"2024-08-18T19:53:07.398175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.779083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T19:53:07.398283Z","caller":"traceutil/trace.go:171","msg":"trace[1845704875] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:488; }","duration":"105.893114ms","start":"2024-08-18T19:53:07.292377Z","end":"2024-08-18T19:53:07.398270Z","steps":["trace[1845704875] 'agreement among raft nodes before linearized reading'  (duration: 105.669704ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:53:07.608665Z","caller":"traceutil/trace.go:171","msg":"trace[42275576] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:523; }","duration":"124.795386ms","start":"2024-08-18T19:53:07.483856Z","end":"2024-08-18T19:53:07.608651Z","steps":["trace[42275576] 'read index received'  (duration: 54.936836ms)","trace[42275576] 'applied index is now lower than readState.Index'  (duration: 69.85815ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:07.608796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.921939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:node\" ","response":"range_response_count:1 size:603"}
	{"level":"info","ts":"2024-08-18T19:53:07.608821Z","caller":"traceutil/trace.go:171","msg":"trace[840061158] range","detail":"{range_begin:/registry/clusterrolebindings/system:node; range_end:; response_count:1; response_revision:490; }","duration":"124.960652ms","start":"2024-08-18T19:53:07.483852Z","end":"2024-08-18T19:53:07.608812Z","steps":["trace[840061158] 'agreement among raft nodes before linearized reading'  (duration: 124.871908ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:53:07.608926Z","caller":"traceutil/trace.go:171","msg":"trace[2029602159] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"125.751814ms","start":"2024-08-18T19:53:07.483155Z","end":"2024-08-18T19:53:07.608907Z","steps":["trace[2029602159] 'process raft request'  (duration: 55.680545ms)","trace[2029602159] 'compare'  (duration: 69.746244ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:53:07.902979Z","caller":"traceutil/trace.go:171","msg":"trace[963842668] linearizableReadLoop","detail":"{readStateIndex:526; appliedIndex:525; }","duration":"269.4848ms","start":"2024-08-18T19:53:07.633478Z","end":"2024-08-18T19:53:07.902963Z","steps":["trace[963842668] 'read index received'  (duration: 210.549723ms)","trace[963842668] 'applied index is now lower than readState.Index'  (duration: 58.93444ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:07.903129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.634354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:endpoint-controller\" ","response":"range_response_count:1 size:751"}
	{"level":"info","ts":"2024-08-18T19:53:07.903157Z","caller":"traceutil/trace.go:171","msg":"trace[963649035] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:endpoint-controller; range_end:; response_count:1; response_revision:492; }","duration":"269.67303ms","start":"2024-08-18T19:53:07.633475Z","end":"2024-08-18T19:53:07.903149Z","steps":["trace[963649035] 'agreement among raft nodes before linearized reading'  (duration: 269.583079ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:53:07.903221Z","caller":"traceutil/trace.go:171","msg":"trace[1794095830] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"273.79839ms","start":"2024-08-18T19:53:07.629233Z","end":"2024-08-18T19:53:07.903031Z","steps":["trace[1794095830] 'process raft request'  (duration: 214.859589ms)","trace[1794095830] 'compare'  (duration: 58.784393ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:53:08.115927Z","caller":"traceutil/trace.go:171","msg":"trace[1732186174] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:527; }","duration":"162.223877ms","start":"2024-08-18T19:53:07.953689Z","end":"2024-08-18T19:53:08.115913Z","steps":["trace[1732186174] 'read index received'  (duration: 96.044863ms)","trace[1732186174] 'applied index is now lower than readState.Index'  (duration: 66.178258ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:53:08.115945Z","caller":"traceutil/trace.go:171","msg":"trace[1926493322] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"163.130089ms","start":"2024-08-18T19:53:07.952793Z","end":"2024-08-18T19:53:08.115923Z","steps":["trace[1926493322] 'process raft request'  (duration: 96.916952ms)","trace[1926493322] 'compare'  (duration: 65.948881ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:08.116112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.479193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T19:53:08.116175Z","caller":"traceutil/trace.go:171","msg":"trace[205156128] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:494; }","duration":"124.531524ms","start":"2024-08-18T19:53:07.991628Z","end":"2024-08-18T19:53:08.116159Z","steps":["trace[205156128] 'agreement among raft nodes before linearized reading'  (duration: 124.462468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:53:08.116064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.358136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:generic-garbage-collector\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-08-18T19:53:08.116378Z","caller":"traceutil/trace.go:171","msg":"trace[63068789] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:generic-garbage-collector; range_end:; response_count:1; response_revision:494; }","duration":"162.6831ms","start":"2024-08-18T19:53:07.953686Z","end":"2024-08-18T19:53:08.116369Z","steps":["trace[63068789] 'agreement among raft nodes before linearized reading'  (duration: 162.289519ms)"],"step_count":1}
	
	
	==> etcd [eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765] <==
	{"level":"info","ts":"2024-08-18T19:52:38.877583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-18T19:52:38.877638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca received MsgPreVoteResp from cfbc0c4dab0211ca at term 2"}
	{"level":"info","ts":"2024-08-18T19:52:38.877679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca became candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:52:38.877703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca received MsgVoteResp from cfbc0c4dab0211ca at term 3"}
	{"level":"info","ts":"2024-08-18T19:52:38.877733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca became leader at term 3"}
	{"level":"info","ts":"2024-08-18T19:52:38.877759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cfbc0c4dab0211ca elected leader cfbc0c4dab0211ca at term 3"}
	{"level":"info","ts":"2024-08-18T19:52:38.880706Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"cfbc0c4dab0211ca","local-member-attributes":"{Name:pause-147100 ClientURLs:[https://192.168.50.46:2379]}","request-path":"/0/members/cfbc0c4dab0211ca/attributes","cluster-id":"6ff72249b152fa27","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T19:52:38.880787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:52:38.880942Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T19:52:38.880999Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T19:52:38.881019Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:52:38.881995Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:52:38.882053Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:52:38.882847Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.46:2379"}
	{"level":"info","ts":"2024-08-18T19:52:38.883014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T19:52:47.713882Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-18T19:52:47.713966Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-147100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.46:2380"],"advertise-client-urls":["https://192.168.50.46:2379"]}
	{"level":"warn","ts":"2024-08-18T19:52:47.714035Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:52:47.714136Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:52:47.737097Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.46:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:52:47.737148Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.46:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:52:47.737202Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cfbc0c4dab0211ca","current-leader-member-id":"cfbc0c4dab0211ca"}
	{"level":"info","ts":"2024-08-18T19:52:47.740862Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.46:2380"}
	{"level":"info","ts":"2024-08-18T19:52:47.741098Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.46:2380"}
	{"level":"info","ts":"2024-08-18T19:52:47.741165Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-147100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.46:2380"],"advertise-client-urls":["https://192.168.50.46:2379"]}
	
	
	==> kernel <==
	 19:53:24 up 2 min,  0 users,  load average: 0.62, 0.34, 0.13
	Linux pause-147100 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c] <==
	W0818 19:52:56.866280       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:56.873870       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:56.883572       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:56.895269       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:56.991167       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.005993       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.006033       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.068149       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.207718       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.231628       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.250019       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.350056       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.351516       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.397377       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.421857       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.426694       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.441901       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.443282       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.451030       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.516712       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.616929       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.712137       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.712148       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.754792       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.774433       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7] <==
	I0818 19:53:03.768588       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:53:03.768812       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 19:53:03.769858       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:53:03.770569       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:53:03.770622       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:53:03.770647       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:53:03.770670       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:53:03.786153       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:53:03.786695       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:53:03.786895       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0818 19:53:03.786966       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0818 19:53:03.790137       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0818 19:53:03.792356       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:53:03.807560       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 19:53:03.815505       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:53:03.815575       1 policy_source.go:224] refreshing policies
	I0818 19:53:03.849257       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:53:04.672563       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0818 19:53:08.886095       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0818 19:53:08.918182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0818 19:53:08.984133       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0818 19:53:09.047521       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0818 19:53:09.057964       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0818 19:53:11.524733       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 19:53:11.527238       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a] <==
	I0818 19:53:10.069450       1 shared_informer.go:320] Caches are synced for expand
	I0818 19:53:10.079082       1 shared_informer.go:320] Caches are synced for attach detach
	I0818 19:53:10.083938       1 shared_informer.go:320] Caches are synced for deployment
	I0818 19:53:10.084411       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0818 19:53:10.084786       1 shared_informer.go:320] Caches are synced for PVC protection
	I0818 19:53:10.084830       1 shared_informer.go:320] Caches are synced for HPA
	I0818 19:53:10.084868       1 shared_informer.go:320] Caches are synced for ephemeral
	I0818 19:53:10.086134       1 shared_informer.go:320] Caches are synced for daemon sets
	I0818 19:53:10.090210       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0818 19:53:10.091017       1 shared_informer.go:320] Caches are synced for endpoint
	I0818 19:53:10.099378       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0818 19:53:10.115387       1 shared_informer.go:320] Caches are synced for disruption
	I0818 19:53:10.117868       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0818 19:53:10.145277       1 shared_informer.go:320] Caches are synced for job
	I0818 19:53:10.152737       1 shared_informer.go:320] Caches are synced for cronjob
	I0818 19:53:10.184719       1 shared_informer.go:320] Caches are synced for stateful set
	I0818 19:53:10.233760       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0818 19:53:10.283723       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:53:10.292757       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:53:10.326237       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0818 19:53:10.710002       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:53:10.710039       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0818 19:53:10.745098       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:53:11.545197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="37.970333ms"
	I0818 19:53:11.545999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="340.577µs"
	
	
	==> kube-controller-manager [b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee] <==
	I0818 19:52:43.505752       1 shared_informer.go:320] Caches are synced for PVC protection
	I0818 19:52:43.509072       1 shared_informer.go:320] Caches are synced for node
	I0818 19:52:43.509134       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0818 19:52:43.509177       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0818 19:52:43.509198       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0818 19:52:43.509220       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0818 19:52:43.510545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-147100"
	I0818 19:52:43.513872       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0818 19:52:43.516662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.580081ms"
	I0818 19:52:43.516781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="65.233µs"
	I0818 19:52:43.554054       1 shared_informer.go:320] Caches are synced for attach detach
	I0818 19:52:43.606120       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0818 19:52:43.649244       1 shared_informer.go:320] Caches are synced for taint
	I0818 19:52:43.649642       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0818 19:52:43.649746       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-147100"
	I0818 19:52:43.649783       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0818 19:52:43.652169       1 shared_informer.go:320] Caches are synced for endpoint
	I0818 19:52:43.657809       1 shared_informer.go:320] Caches are synced for daemon sets
	I0818 19:52:43.704402       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0818 19:52:43.704458       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0818 19:52:43.713691       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:52:43.731644       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:52:44.148289       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:52:44.164960       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:52:44.165018       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:53:04.643741       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:53:04.653493       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.46"]
	E0818 19:53:04.653758       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:53:04.709741       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:53:04.710429       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:53:04.710530       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:53:04.717576       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:53:04.717964       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:53:04.719427       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:53:04.720992       1 config.go:197] "Starting service config controller"
	I0818 19:53:04.721055       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:53:04.721092       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:53:04.721111       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:53:04.721796       1 config.go:326] "Starting node config controller"
	I0818 19:53:04.721846       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:53:04.821958       1 shared_informer.go:320] Caches are synced for node config
	I0818 19:53:04.822113       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:53:04.822125       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:52:37.865400       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:52:40.294958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.46"]
	E0818 19:52:40.295113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:52:40.361443       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:52:40.361510       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:52:40.361557       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:52:40.366116       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:52:40.366614       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:52:40.366641       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:52:40.368505       1 config.go:197] "Starting service config controller"
	I0818 19:52:40.368552       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:52:40.368576       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:52:40.368587       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:52:40.369035       1 config.go:326] "Starting node config controller"
	I0818 19:52:40.369061       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:52:40.469464       1 shared_informer.go:320] Caches are synced for node config
	I0818 19:52:40.469550       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:52:40.469600       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72] <==
	E0818 19:52:40.259408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.259545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 19:52:40.259579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.259557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0818 19:52:40.259648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:52:40.259677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0818 19:52:40.259657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.259934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 19:52:40.259979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.260048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:52:40.260088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.260107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 19:52:40.260119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.260246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 19:52:40.260289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.262439       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 19:52:40.262493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.262528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 19:52:40.262555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.265505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0818 19:52:40.265542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.267615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 19:52:40.267691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0818 19:52:41.140557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 19:52:58.076034       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6] <==
	I0818 19:53:01.886545       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:53:03.709456       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 19:53:03.709498       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 19:53:03.709519       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:53:03.709525       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:53:03.781475       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:53:03.781547       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:53:03.785716       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:53:03.785879       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:53:03.785918       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:53:03.785949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:53:03.886064       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.498280    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-147100"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: E0818 19:53:00.499161    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.46:8443: connect: connection refused" node="pause-147100"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.531909    3310 scope.go:117] "RemoveContainer" containerID="eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.534406    3310 scope.go:117] "RemoveContainer" containerID="5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.535762    3310 scope.go:117] "RemoveContainer" containerID="b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.538777    3310 scope.go:117] "RemoveContainer" containerID="84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: E0818 19:53:00.704598    3310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-147100?timeout=10s\": dial tcp 192.168.50.46:8443: connect: connection refused" interval="800ms"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.901225    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-147100"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: E0818 19:53:00.902620    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.46:8443: connect: connection refused" node="pause-147100"
	Aug 18 19:53:01 pause-147100 kubelet[3310]: I0818 19:53:01.704541    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-147100"
	Aug 18 19:53:03 pause-147100 kubelet[3310]: I0818 19:53:03.896118    3310 kubelet_node_status.go:111] "Node was previously registered" node="pause-147100"
	Aug 18 19:53:03 pause-147100 kubelet[3310]: I0818 19:53:03.896602    3310 kubelet_node_status.go:75] "Successfully registered node" node="pause-147100"
	Aug 18 19:53:03 pause-147100 kubelet[3310]: I0818 19:53:03.896689    3310 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 18 19:53:03 pause-147100 kubelet[3310]: I0818 19:53:03.897933    3310 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.084679    3310 apiserver.go:52] "Watching apiserver"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.103744    3310 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.124110    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94667bc5-72ce-4d1b-b0a3-e4160989b677-xtables-lock\") pod \"kube-proxy-rnm6w\" (UID: \"94667bc5-72ce-4d1b-b0a3-e4160989b677\") " pod="kube-system/kube-proxy-rnm6w"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.124419    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94667bc5-72ce-4d1b-b0a3-e4160989b677-lib-modules\") pod \"kube-proxy-rnm6w\" (UID: \"94667bc5-72ce-4d1b-b0a3-e4160989b677\") " pod="kube-system/kube-proxy-rnm6w"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.389029    3310 scope.go:117] "RemoveContainer" containerID="10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.389601    3310 scope.go:117] "RemoveContainer" containerID="f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f"
	Aug 18 19:53:10 pause-147100 kubelet[3310]: E0818 19:53:10.199831    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010790191887058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:53:10 pause-147100 kubelet[3310]: E0818 19:53:10.202103    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010790191887058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:53:11 pause-147100 kubelet[3310]: I0818 19:53:11.473057    3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 18 19:53:20 pause-147100 kubelet[3310]: E0818 19:53:20.208541    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010800204265738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:53:20 pause-147100 kubelet[3310]: E0818 19:53:20.208586    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010800204265738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-147100 -n pause-147100
helpers_test.go:261: (dbg) Run:  kubectl --context pause-147100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-147100 -n pause-147100
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-147100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-147100 logs -n 25: (1.292993327s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-735899             | cert-expiration-735899    | jenkins | v1.33.1 | 18 Aug 24 19:49 UTC | 18 Aug 24 19:50 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-433596          | force-systemd-flag-433596 | jenkins | v1.33.1 | 18 Aug 24 19:49 UTC | 18 Aug 24 19:50 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:49 UTC | 18 Aug 24 19:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-319765             | running-upgrade-319765    | jenkins | v1.33.1 | 18 Aug 24 19:49 UTC | 18 Aug 24 19:51 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:50 UTC |
	| start   | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:50 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-433596 ssh cat     | force-systemd-flag-433596 | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:50 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-433596          | force-systemd-flag-433596 | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:50 UTC |
	| start   | -p pause-147100 --memory=2048         | pause-147100              | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC | 18 Aug 24 19:51 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-288448 sudo           | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:50 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:51 UTC |
	| start   | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:51 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-319765             | running-upgrade-319765    | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:51 UTC |
	| start   | -p cert-options-272048                | cert-options-272048       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:52 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-288448 sudo           | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-288448                | NoKubernetes-288448       | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:51 UTC |
	| start   | -p kubernetes-upgrade-179876          | kubernetes-upgrade-179876 | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-147100                       | pause-147100              | jenkins | v1.33.1 | 18 Aug 24 19:51 UTC | 18 Aug 24 19:53 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-272048 ssh               | cert-options-272048       | jenkins | v1.33.1 | 18 Aug 24 19:52 UTC | 18 Aug 24 19:52 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-272048 -- sudo        | cert-options-272048       | jenkins | v1.33.1 | 18 Aug 24 19:52 UTC | 18 Aug 24 19:52 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-272048                | cert-options-272048       | jenkins | v1.33.1 | 18 Aug 24 19:52 UTC | 18 Aug 24 19:52 UTC |
	| start   | -p stopped-upgrade-729585             | minikube                  | jenkins | v1.26.0 | 18 Aug 24 19:52 UTC | 18 Aug 24 19:53 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| start   | -p cert-expiration-735899             | cert-expiration-735899    | jenkins | v1.33.1 | 18 Aug 24 19:53 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-729585 stop           | minikube                  | jenkins | v1.26.0 | 18 Aug 24 19:53 UTC | 18 Aug 24 19:53 UTC |
	| start   | -p stopped-upgrade-729585             | stopped-upgrade-729585    | jenkins | v1.33.1 | 18 Aug 24 19:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
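Note on the cert-options-272048 rows above: that test starts a cluster with custom apiserver SANs (--apiserver-ips=127.0.0.1 / 192.168.15.15 and --apiserver-names=localhost / www.google.com) and then verifies them by running openssl x509 -text -noout against /var/lib/minikube/certs/apiserver.crt over ssh. As an illustrative sketch only (not part of the test code), the same SAN check could be done with Go's crypto/x509; the file name below is a hypothetical local copy of the certificate pulled out of the VM:

	// Sketch: decode a PEM certificate and print its SANs, the same
	// information the openssl x509 -text step above inspects.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the VM's cert
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // expected to include localhost, www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses) // expected to include 127.0.0.1, 192.168.15.15
	}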
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 19:53:23
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 19:53:23.352595   57130 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:53:23.352725   57130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:53:23.352733   57130 out.go:358] Setting ErrFile to fd 2...
	I0818 19:53:23.352737   57130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:53:23.352943   57130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:53:23.353426   57130 out.go:352] Setting JSON to false
	I0818 19:53:23.354383   57130 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5747,"bootTime":1724005056,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:53:23.354457   57130 start.go:139] virtualization: kvm guest
	I0818 19:53:23.356711   57130 out.go:177] * [stopped-upgrade-729585] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:53:23.358425   57130 notify.go:220] Checking for updates...
	I0818 19:53:23.358433   57130 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:53:23.360045   57130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:53:23.361460   57130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:53:23.362627   57130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:53:23.363838   57130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:53:23.364967   57130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:53:20.156793   56105 addons.go:510] duration metric: took 2.565819ms for enable addons: enabled=[]
	I0818 19:53:20.156825   56105 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:53:20.322236   56105 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:53:20.338197   56105 node_ready.go:35] waiting up to 6m0s for node "pause-147100" to be "Ready" ...
	I0818 19:53:20.341313   56105 node_ready.go:49] node "pause-147100" has status "Ready":"True"
	I0818 19:53:20.341336   56105 node_ready.go:38] duration metric: took 3.102633ms for node "pause-147100" to be "Ready" ...
	I0818 19:53:20.341346   56105 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 19:53:20.347299   56105 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4whpz" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:20.516097   56105 pod_ready.go:93] pod "coredns-6f6b679f8f-4whpz" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:20.516130   56105 pod_ready.go:82] duration metric: took 168.807538ms for pod "coredns-6f6b679f8f-4whpz" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:20.516143   56105 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:20.915861   56105 pod_ready.go:93] pod "etcd-pause-147100" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:20.915893   56105 pod_ready.go:82] duration metric: took 399.741616ms for pod "etcd-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:20.915906   56105 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:21.315962   56105 pod_ready.go:93] pod "kube-apiserver-pause-147100" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:21.315986   56105 pod_ready.go:82] duration metric: took 400.072219ms for pod "kube-apiserver-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:21.315997   56105 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:21.715514   56105 pod_ready.go:93] pod "kube-controller-manager-pause-147100" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:21.715540   56105 pod_ready.go:82] duration metric: took 399.535794ms for pod "kube-controller-manager-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:21.715554   56105 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rnm6w" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:22.115778   56105 pod_ready.go:93] pod "kube-proxy-rnm6w" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:22.115804   56105 pod_ready.go:82] duration metric: took 400.24263ms for pod "kube-proxy-rnm6w" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:22.115815   56105 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:22.515960   56105 pod_ready.go:93] pod "kube-scheduler-pause-147100" in "kube-system" namespace has status "Ready":"True"
	I0818 19:53:22.515989   56105 pod_ready.go:82] duration metric: took 400.165664ms for pod "kube-scheduler-pause-147100" in "kube-system" namespace to be "Ready" ...
	I0818 19:53:22.516002   56105 pod_ready.go:39] duration metric: took 2.174644643s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 19:53:22.516018   56105 api_server.go:52] waiting for apiserver process to appear ...
	I0818 19:53:22.516072   56105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:53:22.531297   56105 api_server.go:72] duration metric: took 2.377359991s to wait for apiserver process to appear ...
	I0818 19:53:22.531325   56105 api_server.go:88] waiting for apiserver healthz status ...
	I0818 19:53:22.531340   56105 api_server.go:253] Checking apiserver healthz at https://192.168.50.46:8443/healthz ...
	I0818 19:53:22.538312   56105 api_server.go:279] https://192.168.50.46:8443/healthz returned 200:
	ok
	I0818 19:53:22.539472   56105 api_server.go:141] control plane version: v1.31.0
	I0818 19:53:22.539495   56105 api_server.go:131] duration metric: took 8.162382ms to wait for apiserver health ...
	I0818 19:53:22.539504   56105 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 19:53:22.717995   56105 system_pods.go:59] 6 kube-system pods found
	I0818 19:53:22.718023   56105 system_pods.go:61] "coredns-6f6b679f8f-4whpz" [702cffef-e838-46b6-b925-639696e50224] Running
	I0818 19:53:22.718028   56105 system_pods.go:61] "etcd-pause-147100" [23f2639b-cc4c-478b-ba2a-ccb2526a2311] Running
	I0818 19:53:22.718031   56105 system_pods.go:61] "kube-apiserver-pause-147100" [2ba8fe66-ebc7-4203-936f-d1602287c32f] Running
	I0818 19:53:22.718034   56105 system_pods.go:61] "kube-controller-manager-pause-147100" [3973f8c2-6ff5-4339-9ddd-8693926d148a] Running
	I0818 19:53:22.718038   56105 system_pods.go:61] "kube-proxy-rnm6w" [94667bc5-72ce-4d1b-b0a3-e4160989b677] Running
	I0818 19:53:22.718041   56105 system_pods.go:61] "kube-scheduler-pause-147100" [3bba113f-c0fb-4940-abd4-34c41dcc06b2] Running
	I0818 19:53:22.718048   56105 system_pods.go:74] duration metric: took 178.537628ms to wait for pod list to return data ...
	I0818 19:53:22.718056   56105 default_sa.go:34] waiting for default service account to be created ...
	I0818 19:53:22.915550   56105 default_sa.go:45] found service account: "default"
	I0818 19:53:22.915580   56105 default_sa.go:55] duration metric: took 197.517182ms for default service account to be created ...
	I0818 19:53:22.915593   56105 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 19:53:23.117036   56105 system_pods.go:86] 6 kube-system pods found
	I0818 19:53:23.117065   56105 system_pods.go:89] "coredns-6f6b679f8f-4whpz" [702cffef-e838-46b6-b925-639696e50224] Running
	I0818 19:53:23.117073   56105 system_pods.go:89] "etcd-pause-147100" [23f2639b-cc4c-478b-ba2a-ccb2526a2311] Running
	I0818 19:53:23.117078   56105 system_pods.go:89] "kube-apiserver-pause-147100" [2ba8fe66-ebc7-4203-936f-d1602287c32f] Running
	I0818 19:53:23.117084   56105 system_pods.go:89] "kube-controller-manager-pause-147100" [3973f8c2-6ff5-4339-9ddd-8693926d148a] Running
	I0818 19:53:23.117088   56105 system_pods.go:89] "kube-proxy-rnm6w" [94667bc5-72ce-4d1b-b0a3-e4160989b677] Running
	I0818 19:53:23.117093   56105 system_pods.go:89] "kube-scheduler-pause-147100" [3bba113f-c0fb-4940-abd4-34c41dcc06b2] Running
	I0818 19:53:23.117101   56105 system_pods.go:126] duration metric: took 201.502281ms to wait for k8s-apps to be running ...
	I0818 19:53:23.117110   56105 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 19:53:23.117162   56105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:53:23.131914   56105 system_svc.go:56] duration metric: took 14.798743ms WaitForService to wait for kubelet
	I0818 19:53:23.131942   56105 kubeadm.go:582] duration metric: took 2.978011129s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:53:23.131966   56105 node_conditions.go:102] verifying NodePressure condition ...
	I0818 19:53:23.316715   56105 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 19:53:23.316736   56105 node_conditions.go:123] node cpu capacity is 2
	I0818 19:53:23.316747   56105 node_conditions.go:105] duration metric: took 184.775579ms to run NodePressure ...
	I0818 19:53:23.316757   56105 start.go:241] waiting for startup goroutines ...
	I0818 19:53:23.316763   56105 start.go:246] waiting for cluster config update ...
	I0818 19:53:23.316770   56105 start.go:255] writing updated cluster config ...
	I0818 19:53:23.317013   56105 ssh_runner.go:195] Run: rm -f paused
	I0818 19:53:23.370739   56105 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 19:53:23.372398   56105 out.go:177] * Done! kubectl is now configured to use "pause-147100" cluster and "default" namespace by default
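The 56105 lines above are the standard post-start readiness sequence for pause-147100: node Ready, each system-critical pod Ready, then a GET against https://192.168.50.46:8443/healthz that returned 200 before kubeconfig was finalized. As a rough illustration only (not minikube's own code), a healthz poll can be sketched as below; whether the endpoint answers unauthenticated requests depends on the apiserver's anonymous-auth and RBAC settings, and TLS verification is skipped here purely to keep the sketch short:

	// Sketch: poll the apiserver /healthz endpoint until it returns 200 OK.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.50.46:8443/healthz" // address taken from the log above
		for i := 0; i < 30; i++ {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("apiserver did not report healthy within 30s")
	}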
	I0818 19:53:23.366354   57130 config.go:182] Loaded profile config "stopped-upgrade-729585": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0818 19:53:23.366736   57130 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:53:23.366773   57130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:53:23.384716   57130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0818 19:53:23.385225   57130 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:53:23.385736   57130 main.go:141] libmachine: Using API Version  1
	I0818 19:53:23.385757   57130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:53:23.386160   57130 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:53:23.386376   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .DriverName
	I0818 19:53:23.387887   57130 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0818 19:53:23.388982   57130 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:53:23.389404   57130 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:53:23.389448   57130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:53:23.406972   57130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36357
	I0818 19:53:23.407638   57130 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:53:23.408243   57130 main.go:141] libmachine: Using API Version  1
	I0818 19:53:23.408268   57130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:53:23.408644   57130 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:53:23.408836   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .DriverName
	I0818 19:53:23.447095   57130 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 19:53:23.448224   57130 start.go:297] selected driver: kvm2
	I0818 19:53:23.448243   57130 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-729585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-729
585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.168 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 19:53:23.448374   57130 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:53:23.449337   57130 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:53:23.449406   57130 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:53:23.469629   57130 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:53:23.470072   57130 cni.go:84] Creating CNI manager for ""
	I0818 19:53:23.470096   57130 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:53:23.470163   57130 start.go:340] cluster config:
	{Name:stopped-upgrade-729585 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-729585 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.168 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0818 19:53:23.470298   57130 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:53:23.473099   57130 out.go:177] * Starting "stopped-upgrade-729585" primary control-plane node in "stopped-upgrade-729585" cluster
	I0818 19:53:23.474263   57130 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0818 19:53:23.474307   57130 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0818 19:53:23.474316   57130 cache.go:56] Caching tarball of preloaded images
	I0818 19:53:23.474420   57130 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:53:23.474436   57130 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0818 19:53:23.474556   57130 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/stopped-upgrade-729585/config.json ...
	I0818 19:53:23.474804   57130 start.go:360] acquireMachinesLock for stopped-upgrade-729585: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:53:23.474861   57130 start.go:364] duration metric: took 36.126µs to acquireMachinesLock for "stopped-upgrade-729585"
	I0818 19:53:23.474888   57130 start.go:96] Skipping create...Using existing machine configuration
	I0818 19:53:23.474897   57130 fix.go:54] fixHost starting: 
	I0818 19:53:23.475173   57130 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:53:23.475214   57130 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:53:23.489801   57130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0818 19:53:23.490176   57130 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:53:23.490641   57130 main.go:141] libmachine: Using API Version  1
	I0818 19:53:23.490658   57130 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:53:23.491005   57130 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:53:23.491200   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .DriverName
	I0818 19:53:23.491355   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .GetState
	I0818 19:53:23.492827   57130 fix.go:112] recreateIfNeeded on stopped-upgrade-729585: state=Stopped err=<nil>
	I0818 19:53:23.492863   57130 main.go:141] libmachine: (stopped-upgrade-729585) Calling .DriverName
	W0818 19:53:23.493026   57130 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 19:53:23.494924   57130 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-729585" ...
	
	
	==> CRI-O <==
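The crio debug log below records CRI gRPC calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) issued against the runtime by the kubelet and by the test's status checks. For orientation only, a minimal client making the same ListContainers call might look like the sketch below; it assumes the default crio socket at /var/run/crio/crio.sock and the k8s.io/cri-api and google.golang.org/grpc modules:

	// Sketch: list containers over the CRI API, mirroring the
	// /runtime.v1.RuntimeService/ListContainers requests in this log.
	package main

	import (
		"context"
		"fmt"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}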
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.040522296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e353c52-4e59-4a4e-9c96-244bf79e8932 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.042168479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fc1fa7f-2f85-4ff5-8546-10074caa68d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.042682712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010806042659226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fc1fa7f-2f85-4ff5-8546-10074caa68d9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.043523727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3016559b-a13f-49d5-a11d-68058ae06bf0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.043581402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3016559b-a13f-49d5-a11d-68058ae06bf0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.043812929Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724010784417835499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724010784407766032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724010780578025262,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c
6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724010780570715448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724010780560227336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4
bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724010780546885438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724010757146200285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724010756297121099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724010756225931513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause
-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724010756284405478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724010756133022541,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 27ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724010756028228221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: cafb8b2421c6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3016559b-a13f-49d5-a11d-68058ae06bf0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.085073129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07ea4aa2-14cd-407c-a9b2-f7f177ce542b name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.085146878Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07ea4aa2-14cd-407c-a9b2-f7f177ce542b name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.086860573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc2f5e3f-f2eb-41c8-b256-70d293ebd8ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.087241939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010806087216920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc2f5e3f-f2eb-41c8-b256-70d293ebd8ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.088025972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92a913f9-9d03-4e41-b8ee-b5eb57e8ab6d name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.088091751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92a913f9-9d03-4e41-b8ee-b5eb57e8ab6d name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.088405269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724010784417835499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724010784407766032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724010780578025262,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c
6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724010780570715448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724010780560227336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4
bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724010780546885438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724010757146200285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724010756297121099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724010756225931513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause
-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724010756284405478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724010756133022541,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 27ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724010756028228221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: cafb8b2421c6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92a913f9-9d03-4e41-b8ee-b5eb57e8ab6d name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.128869315Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fa813ca-c3e4-430f-9205-2ea0712694a3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.129062629Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-4whpz,Uid:702cffef-e838-46b6-b925-639696e50224,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755970302335,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:51:28.374786206Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&PodSandboxMetadata{Name:etcd-pause-147100,Uid:10b595936c2b444591befdf953e08d34,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1724010755914472341,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.46:2379,kubernetes.io/config.hash: 10b595936c2b444591befdf953e08d34,kubernetes.io/config.seen: 2024-08-18T19:51:23.331389763Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-147100,Uid:36455a54211e341dc9a4bd1cebfe81f4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755914127222,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 36455a54211e341dc9a4bd1cebfe81f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.46:8443,kubernetes.io/config.hash: 36455a54211e341dc9a4bd1cebfe81f4,kubernetes.io/config.seen: 2024-08-18T19:51:23.331395920Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&PodSandboxMetadata{Name:kube-proxy-rnm6w,Uid:94667bc5-72ce-4d1b-b0a3-e4160989b677,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755764638430,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T19:51:28.008777345Z,kubernetes.io/config.source: api,},RuntimeHan
dler:,},&PodSandbox{Id:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-147100,Uid:27ea1fe9c03fccf5ef1ae77bc408f564,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755754300399,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ea1fe9c03fccf5ef1ae77bc408f564,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 27ea1fe9c03fccf5ef1ae77bc408f564,kubernetes.io/config.seen: 2024-08-18T19:51:23.331397476Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-147100,Uid:cafb8b2421c6c891904bf1e9c4348c24,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724010755722302058,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c6c891904bf1e9c4348c24,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cafb8b2421c6c891904bf1e9c4348c24,kubernetes.io/config.seen: 2024-08-18T19:51:23.331399106Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0fa813ca-c3e4-430f-9205-2ea0712694a3 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.129629642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4392d765-f9ef-4b80-aa95-be3709738667 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.129694534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4392d765-f9ef-4b80-aa95-be3709738667 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.129894998Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724010784417835499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724010784407766032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724010780578025262,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c
6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724010780570715448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724010780560227336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4
bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724010780546885438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4392d765-f9ef-4b80-aa95-be3709738667 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.139938912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6665c560-084f-463d-98d4-00ea3bbd4ec5 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.140001052Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6665c560-084f-463d-98d4-00ea3bbd4ec5 name=/runtime.v1.RuntimeService/Version
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.140999410Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18693c13-bfe5-433d-a69f-d44d48b5a5ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.141395055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010806141373949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18693c13-bfe5-433d-a69f-d44d48b5a5ab name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.141872922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82601823-7c83-478b-9a5e-60191e8d08c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.141923116Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82601823-7c83-478b-9a5e-60191e8d08c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 19:53:26 pause-147100 crio[2322]: time="2024-08-18 19:53:26.142145436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724010784417835499,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724010784407766032,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724010780578025262,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb8b2421c
6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724010780570715448,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724010780560227336,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4
bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724010780546885438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.
kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c,PodSandboxId:1890a54271e90404a71b183895746682f782fdbd29658f7631bff77e7bba5100,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724010757146200285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-4whpz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 702cffef-e838-46b6-b925-639696e50224,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52
134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f,PodSandboxId:a14ab7f71df89e33df25b9bed7f97c331dbbab7ae33e0dec909b669d13c8a3f0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724010756297121099,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod
.name: kube-proxy-rnm6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94667bc5-72ce-4d1b-b0a3-e4160989b677,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c,PodSandboxId:dd73d48408569458e43f075b02a290602712e90db9fc097d907ff66efe29b16e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724010756225931513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause
-147100,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36455a54211e341dc9a4bd1cebfe81f4,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765,PodSandboxId:facc15b44d0cec585e39031fa16186c0d9c5ea2575816bd15e175f3c819e0fe1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724010756284405478,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-147100,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 10b595936c2b444591befdf953e08d34,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee,PodSandboxId:3dd52943504ffcef9939c8c0c5522f6af5befc6a6ad300824ba26819b29b2fbe,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724010756133022541,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-147100,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 27ea1fe9c03fccf5ef1ae77bc408f564,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72,PodSandboxId:0add04a2ce4e381675e68ab15f2c145e9a0b9b176b69d4cf14f94e8f39d4463b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724010756028228221,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-147100,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: cafb8b2421c6c891904bf1e9c4348c24,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82601823-7c83-478b-9a5e-60191e8d08c5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ef6fc3b6c7222       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   21 seconds ago      Running             kube-proxy                2                   a14ab7f71df89       kube-proxy-rnm6w
	ef36d36ec67dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Running             coredns                   2                   1890a54271e90       coredns-6f6b679f8f-4whpz
	93d4247e182f4       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   25 seconds ago      Running             kube-scheduler            2                   0add04a2ce4e3       kube-scheduler-pause-147100
	08fa87a8eddc2       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   25 seconds ago      Running             kube-controller-manager   2                   3dd52943504ff       kube-controller-manager-pause-147100
	bb12c02a28f97       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   25 seconds ago      Running             kube-apiserver            2                   dd73d48408569       kube-apiserver-pause-147100
	226f588a112ea       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago      Running             etcd                      2                   facc15b44d0ce       etcd-pause-147100
	10b4e800c2a3d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   49 seconds ago      Exited              coredns                   1                   1890a54271e90       coredns-6f6b679f8f-4whpz
	f5626839dea18       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   49 seconds ago      Exited              kube-proxy                1                   a14ab7f71df89       kube-proxy-rnm6w
	eeaacf0c08f87       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   49 seconds ago      Exited              etcd                      1                   facc15b44d0ce       etcd-pause-147100
	5d7e54722e292       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   49 seconds ago      Exited              kube-apiserver            1                   dd73d48408569       kube-apiserver-pause-147100
	b9fdae3b3d9f7       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   50 seconds ago      Exited              kube-controller-manager   1                   3dd52943504ff       kube-controller-manager-pause-147100
	84ecf3a275a0e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   50 seconds ago      Exited              kube-scheduler            1                   0add04a2ce4e3       kube-scheduler-pause-147100
	
	
	==> coredns [10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54860 - 62269 "HINFO IN 538777352292154669.2928990509908764833. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013916586s
	
	
	==> coredns [ef36d36ec67dc541cf89dc8dd50ed7aab34fee8df07f5be56c0fcb255c0a7776] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59596 - 37857 "HINFO IN 6015369515724236624.7058985208755670938. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017176011s
	
	
	==> describe nodes <==
	Name:               pause-147100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-147100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=pause-147100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T19_51_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:51:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-147100
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:53:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:53:03 +0000   Sun, 18 Aug 2024 19:51:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:53:03 +0000   Sun, 18 Aug 2024 19:51:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:53:03 +0000   Sun, 18 Aug 2024 19:51:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:53:03 +0000   Sun, 18 Aug 2024 19:51:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.46
	  Hostname:    pause-147100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 066b47dfa6b44c869a948b1fd1b30c1d
	  System UUID:                066b47df-a6b4-4c86-9a94-8b1fd1b30c1d
	  Boot ID:                    c4c9de30-34c4-4cff-bc08-cafb0a689fcc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-4whpz                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     118s
	  kube-system                 etcd-pause-147100                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m3s
	  kube-system                 kube-apiserver-pause-147100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-pause-147100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-rnm6w                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-pause-147100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 117s               kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 46s                kube-proxy       
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m3s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node pause-147100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node pause-147100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node pause-147100 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m2s               kubelet          Node pause-147100 status is now: NodeReady
	  Normal  RegisteredNode           119s               node-controller  Node pause-147100 event: Registered Node pause-147100 in Controller
	  Normal  RegisteredNode           43s                node-controller  Node pause-147100 event: Registered Node pause-147100 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-147100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-147100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-147100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                node-controller  Node pause-147100 event: Registered Node pause-147100 in Controller
	
	
	==> dmesg <==
	[  +6.227822] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.058549] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057086] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.194420] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.125209] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.293394] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +4.138546] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +4.293772] systemd-fstab-generator[888]: Ignoring "noauto" option for root device
	[  +0.063508] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.014725] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.088663] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.920769] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.118737] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.684592] kauditd_printk_skb: 106 callbacks suppressed
	[Aug18 19:52] systemd-fstab-generator[2240]: Ignoring "noauto" option for root device
	[  +0.167965] systemd-fstab-generator[2252]: Ignoring "noauto" option for root device
	[  +0.209541] systemd-fstab-generator[2266]: Ignoring "noauto" option for root device
	[  +0.128645] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.281135] systemd-fstab-generator[2306]: Ignoring "noauto" option for root device
	[  +3.437990] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +3.993546] kauditd_printk_skb: 195 callbacks suppressed
	[ +19.254395] systemd-fstab-generator[3303]: Ignoring "noauto" option for root device
	[Aug18 19:53] kauditd_printk_skb: 41 callbacks suppressed
	[  +7.072491] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.659588] systemd-fstab-generator[3750]: Ignoring "noauto" option for root device
	
	
	==> etcd [226f588a112ead5bc696fa20ecd080c7145eb10ed387a3a9a83e755cc7a003e8] <==
	{"level":"info","ts":"2024-08-18T19:53:06.964393Z","caller":"traceutil/trace.go:171","msg":"trace[1376264623] transaction","detail":"{read_only:false; response_revision:487; number_of_response:1; }","duration":"257.705429ms","start":"2024-08-18T19:53:06.706668Z","end":"2024-08-18T19:53:06.964374Z","steps":["trace[1376264623] 'process raft request'  (duration: 128.735493ms)","trace[1376264623] 'compare'  (duration: 128.478586ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:06.964574Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.380027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:discovery\" ","response":"range_response_count:1 size:694"}
	{"level":"info","ts":"2024-08-18T19:53:06.964707Z","caller":"traceutil/trace.go:171","msg":"trace[638292811] range","detail":"{range_begin:/registry/clusterrolebindings/system:discovery; range_end:; response_count:1; response_revision:487; }","duration":"257.520326ms","start":"2024-08-18T19:53:06.707176Z","end":"2024-08-18T19:53:06.964696Z","steps":["trace[638292811] 'agreement among raft nodes before linearized reading'  (duration: 257.311337ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:53:07.397566Z","caller":"traceutil/trace.go:171","msg":"trace[525141069] linearizableReadLoop","detail":"{readStateIndex:522; appliedIndex:521; }","duration":"425.8964ms","start":"2024-08-18T19:53:06.971651Z","end":"2024-08-18T19:53:07.397547Z","steps":["trace[525141069] 'read index received'  (duration: 341.415886ms)","trace[525141069] 'applied index is now lower than readState.Index'  (duration: 84.479495ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:53:07.397681Z","caller":"traceutil/trace.go:171","msg":"trace[1877656057] transaction","detail":"{read_only:false; response_revision:488; number_of_response:1; }","duration":"426.264706ms","start":"2024-08-18T19:53:06.971408Z","end":"2024-08-18T19:53:07.397672Z","steps":["trace[1877656057] 'process raft request'  (duration: 341.702874ms)","trace[1877656057] 'compare'  (duration: 84.141151ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:07.397758Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:53:06.971297Z","time spent":"426.407359ms","remote":"127.0.0.1:59246","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":664,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-147100.17eceaab9d4c48d0\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-147100.17eceaab9d4c48d0\" value_size:592 lease:1281996915723541988 >> failure:<>"}
	{"level":"warn","ts":"2024-08-18T19:53:07.398041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"426.377104ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:node-proxier\" ","response":"range_response_count:1 size:699"}
	{"level":"info","ts":"2024-08-18T19:53:07.398123Z","caller":"traceutil/trace.go:171","msg":"trace[1747235898] range","detail":"{range_begin:/registry/clusterrolebindings/system:node-proxier; range_end:; response_count:1; response_revision:488; }","duration":"426.440112ms","start":"2024-08-18T19:53:06.971647Z","end":"2024-08-18T19:53:07.398088Z","steps":["trace[1747235898] 'agreement among raft nodes before linearized reading'  (duration: 426.287675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:53:07.398156Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-18T19:53:06.971622Z","time spent":"426.526148ms","remote":"127.0.0.1:59490","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":722,"request content":"key:\"/registry/clusterrolebindings/system:node-proxier\" "}
	{"level":"warn","ts":"2024-08-18T19:53:07.398175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.779083ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T19:53:07.398283Z","caller":"traceutil/trace.go:171","msg":"trace[1845704875] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:488; }","duration":"105.893114ms","start":"2024-08-18T19:53:07.292377Z","end":"2024-08-18T19:53:07.398270Z","steps":["trace[1845704875] 'agreement among raft nodes before linearized reading'  (duration: 105.669704ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:53:07.608665Z","caller":"traceutil/trace.go:171","msg":"trace[42275576] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:523; }","duration":"124.795386ms","start":"2024-08-18T19:53:07.483856Z","end":"2024-08-18T19:53:07.608651Z","steps":["trace[42275576] 'read index received'  (duration: 54.936836ms)","trace[42275576] 'applied index is now lower than readState.Index'  (duration: 69.85815ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:07.608796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.921939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:node\" ","response":"range_response_count:1 size:603"}
	{"level":"info","ts":"2024-08-18T19:53:07.608821Z","caller":"traceutil/trace.go:171","msg":"trace[840061158] range","detail":"{range_begin:/registry/clusterrolebindings/system:node; range_end:; response_count:1; response_revision:490; }","duration":"124.960652ms","start":"2024-08-18T19:53:07.483852Z","end":"2024-08-18T19:53:07.608812Z","steps":["trace[840061158] 'agreement among raft nodes before linearized reading'  (duration: 124.871908ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:53:07.608926Z","caller":"traceutil/trace.go:171","msg":"trace[2029602159] transaction","detail":"{read_only:false; response_revision:490; number_of_response:1; }","duration":"125.751814ms","start":"2024-08-18T19:53:07.483155Z","end":"2024-08-18T19:53:07.608907Z","steps":["trace[2029602159] 'process raft request'  (duration: 55.680545ms)","trace[2029602159] 'compare'  (duration: 69.746244ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:53:07.902979Z","caller":"traceutil/trace.go:171","msg":"trace[963842668] linearizableReadLoop","detail":"{readStateIndex:526; appliedIndex:525; }","duration":"269.4848ms","start":"2024-08-18T19:53:07.633478Z","end":"2024-08-18T19:53:07.902963Z","steps":["trace[963842668] 'read index received'  (duration: 210.549723ms)","trace[963842668] 'applied index is now lower than readState.Index'  (duration: 58.93444ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:07.903129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"269.634354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:endpoint-controller\" ","response":"range_response_count:1 size:751"}
	{"level":"info","ts":"2024-08-18T19:53:07.903157Z","caller":"traceutil/trace.go:171","msg":"trace[963649035] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:endpoint-controller; range_end:; response_count:1; response_revision:492; }","duration":"269.67303ms","start":"2024-08-18T19:53:07.633475Z","end":"2024-08-18T19:53:07.903149Z","steps":["trace[963649035] 'agreement among raft nodes before linearized reading'  (duration: 269.583079ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-18T19:53:07.903221Z","caller":"traceutil/trace.go:171","msg":"trace[1794095830] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"273.79839ms","start":"2024-08-18T19:53:07.629233Z","end":"2024-08-18T19:53:07.903031Z","steps":["trace[1794095830] 'process raft request'  (duration: 214.859589ms)","trace[1794095830] 'compare'  (duration: 58.784393ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:53:08.115927Z","caller":"traceutil/trace.go:171","msg":"trace[1732186174] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:527; }","duration":"162.223877ms","start":"2024-08-18T19:53:07.953689Z","end":"2024-08-18T19:53:08.115913Z","steps":["trace[1732186174] 'read index received'  (duration: 96.044863ms)","trace[1732186174] 'applied index is now lower than readState.Index'  (duration: 66.178258ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-18T19:53:08.115945Z","caller":"traceutil/trace.go:171","msg":"trace[1926493322] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"163.130089ms","start":"2024-08-18T19:53:07.952793Z","end":"2024-08-18T19:53:08.115923Z","steps":["trace[1926493322] 'process raft request'  (duration: 96.916952ms)","trace[1926493322] 'compare'  (duration: 65.948881ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-18T19:53:08.116112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.479193ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-18T19:53:08.116175Z","caller":"traceutil/trace.go:171","msg":"trace[205156128] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:494; }","duration":"124.531524ms","start":"2024-08-18T19:53:07.991628Z","end":"2024-08-18T19:53:08.116159Z","steps":["trace[205156128] 'agreement among raft nodes before linearized reading'  (duration: 124.462468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-18T19:53:08.116064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.358136ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:controller:generic-garbage-collector\" ","response":"range_response_count:1 size:775"}
	{"level":"info","ts":"2024-08-18T19:53:08.116378Z","caller":"traceutil/trace.go:171","msg":"trace[63068789] range","detail":"{range_begin:/registry/clusterrolebindings/system:controller:generic-garbage-collector; range_end:; response_count:1; response_revision:494; }","duration":"162.6831ms","start":"2024-08-18T19:53:07.953686Z","end":"2024-08-18T19:53:08.116369Z","steps":["trace[63068789] 'agreement among raft nodes before linearized reading'  (duration: 162.289519ms)"],"step_count":1}
	
	
	==> etcd [eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765] <==
	{"level":"info","ts":"2024-08-18T19:52:38.877583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-18T19:52:38.877638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca received MsgPreVoteResp from cfbc0c4dab0211ca at term 2"}
	{"level":"info","ts":"2024-08-18T19:52:38.877679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca became candidate at term 3"}
	{"level":"info","ts":"2024-08-18T19:52:38.877703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca received MsgVoteResp from cfbc0c4dab0211ca at term 3"}
	{"level":"info","ts":"2024-08-18T19:52:38.877733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfbc0c4dab0211ca became leader at term 3"}
	{"level":"info","ts":"2024-08-18T19:52:38.877759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cfbc0c4dab0211ca elected leader cfbc0c4dab0211ca at term 3"}
	{"level":"info","ts":"2024-08-18T19:52:38.880706Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"cfbc0c4dab0211ca","local-member-attributes":"{Name:pause-147100 ClientURLs:[https://192.168.50.46:2379]}","request-path":"/0/members/cfbc0c4dab0211ca/attributes","cluster-id":"6ff72249b152fa27","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T19:52:38.880787Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:52:38.880942Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T19:52:38.880999Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T19:52:38.881019Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T19:52:38.881995Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:52:38.882053Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T19:52:38.882847Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.46:2379"}
	{"level":"info","ts":"2024-08-18T19:52:38.883014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T19:52:47.713882Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-18T19:52:47.713966Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-147100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.46:2380"],"advertise-client-urls":["https://192.168.50.46:2379"]}
	{"level":"warn","ts":"2024-08-18T19:52:47.714035Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:52:47.714136Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:52:47.737097Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.46:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-18T19:52:47.737148Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.46:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-18T19:52:47.737202Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"cfbc0c4dab0211ca","current-leader-member-id":"cfbc0c4dab0211ca"}
	{"level":"info","ts":"2024-08-18T19:52:47.740862Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.46:2380"}
	{"level":"info","ts":"2024-08-18T19:52:47.741098Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.46:2380"}
	{"level":"info","ts":"2024-08-18T19:52:47.741165Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-147100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.46:2380"],"advertise-client-urls":["https://192.168.50.46:2379"]}
	
	
	==> kernel <==
	 19:53:26 up 2 min,  0 users,  load average: 0.62, 0.34, 0.13
	Linux pause-147100 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c] <==
	W0818 19:52:56.866280       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:56.873870       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:56.883572       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:56.895269       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:56.991167       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.005993       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.006033       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.068149       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.207718       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.231628       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.250019       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.350056       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.351516       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.397377       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.421857       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.426694       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.441901       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.443282       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.451030       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.516712       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.616929       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.712137       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.712148       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.754792       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 19:52:57.774433       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bb12c02a28f97a9ad669c984e6aa9509ad2eb1a3b16fd17c4ed76086a735f4f7] <==
	I0818 19:53:03.768588       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 19:53:03.768812       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 19:53:03.769858       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 19:53:03.770569       1 aggregator.go:171] initial CRD sync complete...
	I0818 19:53:03.770622       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 19:53:03.770647       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 19:53:03.770670       1 cache.go:39] Caches are synced for autoregister controller
	I0818 19:53:03.786153       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 19:53:03.786695       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 19:53:03.786895       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0818 19:53:03.786966       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0818 19:53:03.790137       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0818 19:53:03.792356       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 19:53:03.807560       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 19:53:03.815505       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 19:53:03.815575       1 policy_source.go:224] refreshing policies
	I0818 19:53:03.849257       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:53:04.672563       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0818 19:53:08.886095       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0818 19:53:08.918182       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0818 19:53:08.984133       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0818 19:53:09.047521       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0818 19:53:09.057964       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0818 19:53:11.524733       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 19:53:11.527238       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [08fa87a8eddc20620cbd323b83a9f344a7296dbada77277838927a637abebe9a] <==
	I0818 19:53:10.069450       1 shared_informer.go:320] Caches are synced for expand
	I0818 19:53:10.079082       1 shared_informer.go:320] Caches are synced for attach detach
	I0818 19:53:10.083938       1 shared_informer.go:320] Caches are synced for deployment
	I0818 19:53:10.084411       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0818 19:53:10.084786       1 shared_informer.go:320] Caches are synced for PVC protection
	I0818 19:53:10.084830       1 shared_informer.go:320] Caches are synced for HPA
	I0818 19:53:10.084868       1 shared_informer.go:320] Caches are synced for ephemeral
	I0818 19:53:10.086134       1 shared_informer.go:320] Caches are synced for daemon sets
	I0818 19:53:10.090210       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0818 19:53:10.091017       1 shared_informer.go:320] Caches are synced for endpoint
	I0818 19:53:10.099378       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0818 19:53:10.115387       1 shared_informer.go:320] Caches are synced for disruption
	I0818 19:53:10.117868       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0818 19:53:10.145277       1 shared_informer.go:320] Caches are synced for job
	I0818 19:53:10.152737       1 shared_informer.go:320] Caches are synced for cronjob
	I0818 19:53:10.184719       1 shared_informer.go:320] Caches are synced for stateful set
	I0818 19:53:10.233760       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0818 19:53:10.283723       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:53:10.292757       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:53:10.326237       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0818 19:53:10.710002       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:53:10.710039       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0818 19:53:10.745098       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:53:11.545197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="37.970333ms"
	I0818 19:53:11.545999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="340.577µs"
	
	
	==> kube-controller-manager [b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee] <==
	I0818 19:52:43.505752       1 shared_informer.go:320] Caches are synced for PVC protection
	I0818 19:52:43.509072       1 shared_informer.go:320] Caches are synced for node
	I0818 19:52:43.509134       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0818 19:52:43.509177       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0818 19:52:43.509198       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0818 19:52:43.509220       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0818 19:52:43.510545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-147100"
	I0818 19:52:43.513872       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0818 19:52:43.516662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="61.580081ms"
	I0818 19:52:43.516781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="65.233µs"
	I0818 19:52:43.554054       1 shared_informer.go:320] Caches are synced for attach detach
	I0818 19:52:43.606120       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0818 19:52:43.649244       1 shared_informer.go:320] Caches are synced for taint
	I0818 19:52:43.649642       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0818 19:52:43.649746       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-147100"
	I0818 19:52:43.649783       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0818 19:52:43.652169       1 shared_informer.go:320] Caches are synced for endpoint
	I0818 19:52:43.657809       1 shared_informer.go:320] Caches are synced for daemon sets
	I0818 19:52:43.704402       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0818 19:52:43.704458       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0818 19:52:43.713691       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:52:43.731644       1 shared_informer.go:320] Caches are synced for resource quota
	I0818 19:52:44.148289       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:52:44.164960       1 shared_informer.go:320] Caches are synced for garbage collector
	I0818 19:52:44.165018       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ef6fc3b6c722270285555cca0f22168a44637ebbf2fe6f93e09719780983d425] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:53:04.643741       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:53:04.653493       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.46"]
	E0818 19:53:04.653758       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:53:04.709741       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:53:04.710429       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:53:04.710530       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:53:04.717576       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:53:04.717964       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:53:04.719427       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:53:04.720992       1 config.go:197] "Starting service config controller"
	I0818 19:53:04.721055       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:53:04.721092       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:53:04.721111       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:53:04.721796       1 config.go:326] "Starting node config controller"
	I0818 19:53:04.721846       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:53:04.821958       1 shared_informer.go:320] Caches are synced for node config
	I0818 19:53:04.822113       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:53:04.822125       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 19:52:37.865400       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 19:52:40.294958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.46"]
	E0818 19:52:40.295113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 19:52:40.361443       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 19:52:40.361510       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 19:52:40.361557       1 server_linux.go:169] "Using iptables Proxier"
	I0818 19:52:40.366116       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 19:52:40.366614       1 server.go:483] "Version info" version="v1.31.0"
	I0818 19:52:40.366641       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:52:40.368505       1 config.go:197] "Starting service config controller"
	I0818 19:52:40.368552       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 19:52:40.368576       1 config.go:104] "Starting endpoint slice config controller"
	I0818 19:52:40.368587       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 19:52:40.369035       1 config.go:326] "Starting node config controller"
	I0818 19:52:40.369061       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 19:52:40.469464       1 shared_informer.go:320] Caches are synced for node config
	I0818 19:52:40.469550       1 shared_informer.go:320] Caches are synced for service config
	I0818 19:52:40.469600       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72] <==
	E0818 19:52:40.259408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.259545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 19:52:40.259579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.259557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0818 19:52:40.259648       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:52:40.259677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0818 19:52:40.259657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.259934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 19:52:40.259979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.260048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:52:40.260088       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.260107       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 19:52:40.260119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.260246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 19:52:40.260289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.262439       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 19:52:40.262493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.262528       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 19:52:40.262555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.265505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0818 19:52:40.265542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 19:52:40.267615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 19:52:40.267691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0818 19:52:41.140557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 19:52:58.076034       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [93d4247e182f4c9f602ed09e28872d7b3ae860cb07b0b0a75c99cdda80a6d7d6] <==
	I0818 19:53:01.886545       1 serving.go:386] Generated self-signed cert in-memory
	W0818 19:53:03.709456       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 19:53:03.709498       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 19:53:03.709519       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:53:03.709525       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:53:03.781475       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 19:53:03.781547       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 19:53:03.785716       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 19:53:03.785879       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 19:53:03.785918       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:53:03.785949       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 19:53:03.886064       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.498280    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-147100"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: E0818 19:53:00.499161    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.46:8443: connect: connection refused" node="pause-147100"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.531909    3310 scope.go:117] "RemoveContainer" containerID="eeaacf0c08f87d53d4326296d0c9b7cfe8518dbcfa933c7484223aed66249765"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.534406    3310 scope.go:117] "RemoveContainer" containerID="5d7e54722e292d2b6ea5aef1d94abcc74b98bd5626348efaf2d6882d73142d0c"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.535762    3310 scope.go:117] "RemoveContainer" containerID="b9fdae3b3d9f783aafa9f447eceb4b2925908236ecbf70da8a00f3a7361e6dee"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.538777    3310 scope.go:117] "RemoveContainer" containerID="84ecf3a275a0e51c8341eb986c77873c9932619c28ff60082d6361e72cc0fe72"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: E0818 19:53:00.704598    3310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-147100?timeout=10s\": dial tcp 192.168.50.46:8443: connect: connection refused" interval="800ms"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: I0818 19:53:00.901225    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-147100"
	Aug 18 19:53:00 pause-147100 kubelet[3310]: E0818 19:53:00.902620    3310 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.46:8443: connect: connection refused" node="pause-147100"
	Aug 18 19:53:01 pause-147100 kubelet[3310]: I0818 19:53:01.704541    3310 kubelet_node_status.go:72] "Attempting to register node" node="pause-147100"
	Aug 18 19:53:03 pause-147100 kubelet[3310]: I0818 19:53:03.896118    3310 kubelet_node_status.go:111] "Node was previously registered" node="pause-147100"
	Aug 18 19:53:03 pause-147100 kubelet[3310]: I0818 19:53:03.896602    3310 kubelet_node_status.go:75] "Successfully registered node" node="pause-147100"
	Aug 18 19:53:03 pause-147100 kubelet[3310]: I0818 19:53:03.896689    3310 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 18 19:53:03 pause-147100 kubelet[3310]: I0818 19:53:03.897933    3310 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.084679    3310 apiserver.go:52] "Watching apiserver"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.103744    3310 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.124110    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94667bc5-72ce-4d1b-b0a3-e4160989b677-xtables-lock\") pod \"kube-proxy-rnm6w\" (UID: \"94667bc5-72ce-4d1b-b0a3-e4160989b677\") " pod="kube-system/kube-proxy-rnm6w"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.124419    3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94667bc5-72ce-4d1b-b0a3-e4160989b677-lib-modules\") pod \"kube-proxy-rnm6w\" (UID: \"94667bc5-72ce-4d1b-b0a3-e4160989b677\") " pod="kube-system/kube-proxy-rnm6w"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.389029    3310 scope.go:117] "RemoveContainer" containerID="10b4e800c2a3da926df0685ab0af9ba3ee2eb98456b799fed08fd7d2921ac35c"
	Aug 18 19:53:04 pause-147100 kubelet[3310]: I0818 19:53:04.389601    3310 scope.go:117] "RemoveContainer" containerID="f5626839dea18785b79467cae459ead3ae3ca044e543e458f0d4bdde5140bd7f"
	Aug 18 19:53:10 pause-147100 kubelet[3310]: E0818 19:53:10.199831    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010790191887058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:53:10 pause-147100 kubelet[3310]: E0818 19:53:10.202103    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010790191887058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:53:11 pause-147100 kubelet[3310]: I0818 19:53:11.473057    3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 18 19:53:20 pause-147100 kubelet[3310]: E0818 19:53:20.208541    3310 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010800204265738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 19:53:20 pause-147100 kubelet[3310]: E0818 19:53:20.208586    3310 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724010800204265738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-147100 -n pause-147100
helpers_test.go:261: (dbg) Run:  kubectl --context pause-147100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (87.89s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (277.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-247539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-247539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m36.760300111s)

                                                
                                                
-- stdout --
	* [old-k8s-version-247539] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-247539" primary control-plane node in "old-k8s-version-247539" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:57:57.631272   67330 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:57:57.631374   67330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:57:57.631395   67330 out.go:358] Setting ErrFile to fd 2...
	I0818 19:57:57.631402   67330 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:57:57.631599   67330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:57:57.632162   67330 out.go:352] Setting JSON to false
	I0818 19:57:57.633787   67330 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6022,"bootTime":1724005056,"procs":303,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:57:57.633927   67330 start.go:139] virtualization: kvm guest
	I0818 19:57:57.635982   67330 out.go:177] * [old-k8s-version-247539] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:57:57.638154   67330 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:57:57.638163   67330 notify.go:220] Checking for updates...
	I0818 19:57:57.639908   67330 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:57:57.641396   67330 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:57:57.642819   67330 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:57:57.644128   67330 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:57:57.645540   67330 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:57:57.647796   67330 config.go:182] Loaded profile config "bridge-754609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:57:57.647907   67330 config.go:182] Loaded profile config "flannel-754609": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:57:57.648002   67330 config.go:182] Loaded profile config "kubernetes-upgrade-179876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:57:57.648102   67330 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:57:57.689373   67330 out.go:177] * Using the kvm2 driver based on user configuration
	I0818 19:57:57.690772   67330 start.go:297] selected driver: kvm2
	I0818 19:57:57.690797   67330 start.go:901] validating driver "kvm2" against <nil>
	I0818 19:57:57.690813   67330 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:57:57.691597   67330 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:57:57.691686   67330 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 19:57:57.710728   67330 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 19:57:57.710785   67330 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 19:57:57.711052   67330 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:57:57.711111   67330 cni.go:84] Creating CNI manager for ""
	I0818 19:57:57.711125   67330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:57:57.711138   67330 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 19:57:57.711203   67330 start.go:340] cluster config:
	{Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:57:57.711315   67330 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:57:57.713830   67330 out.go:177] * Starting "old-k8s-version-247539" primary control-plane node in "old-k8s-version-247539" cluster
	I0818 19:57:57.715569   67330 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 19:57:57.715604   67330 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0818 19:57:57.715613   67330 cache.go:56] Caching tarball of preloaded images
	I0818 19:57:57.715707   67330 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 19:57:57.715720   67330 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0818 19:57:57.715807   67330 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 19:57:57.715824   67330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json: {Name:mkb4188f9b593942a2eada7595484de4b7d28645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:57:57.715974   67330 start.go:360] acquireMachinesLock for old-k8s-version-247539: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 19:57:57.716009   67330 start.go:364] duration metric: took 18.162µs to acquireMachinesLock for "old-k8s-version-247539"
	I0818 19:57:57.716032   67330 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 19:57:57.716103   67330 start.go:125] createHost starting for "" (driver="kvm2")
	I0818 19:57:57.717820   67330 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0818 19:57:57.718000   67330 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 19:57:57.718047   67330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:57:57.736836   67330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0818 19:57:57.737315   67330 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:57:57.738421   67330 main.go:141] libmachine: Using API Version  1
	I0818 19:57:57.738444   67330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:57:57.738861   67330 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:57:57.739197   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 19:57:57.739369   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:57:57.739567   67330 start.go:159] libmachine.API.Create for "old-k8s-version-247539" (driver="kvm2")
	I0818 19:57:57.739593   67330 client.go:168] LocalClient.Create starting
	I0818 19:57:57.739638   67330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem
	I0818 19:57:57.739684   67330 main.go:141] libmachine: Decoding PEM data...
	I0818 19:57:57.739706   67330 main.go:141] libmachine: Parsing certificate...
	I0818 19:57:57.739789   67330 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem
	I0818 19:57:57.739815   67330 main.go:141] libmachine: Decoding PEM data...
	I0818 19:57:57.739834   67330 main.go:141] libmachine: Parsing certificate...
	I0818 19:57:57.739863   67330 main.go:141] libmachine: Running pre-create checks...
	I0818 19:57:57.739880   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .PreCreateCheck
	I0818 19:57:57.740278   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 19:57:57.740661   67330 main.go:141] libmachine: Creating machine...
	I0818 19:57:57.740676   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .Create
	I0818 19:57:57.740819   67330 main.go:141] libmachine: (old-k8s-version-247539) Creating KVM machine...
	I0818 19:57:57.742438   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found existing default KVM network
	I0818 19:57:57.744320   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:57.744138   67359 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:2d:b9} reservation:<nil>}
	I0818 19:57:57.745875   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:57.745799   67359 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000348110}
	I0818 19:57:57.746022   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | created network xml: 
	I0818 19:57:57.746047   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | <network>
	I0818 19:57:57.746062   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   <name>mk-old-k8s-version-247539</name>
	I0818 19:57:57.746075   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   <dns enable='no'/>
	I0818 19:57:57.746085   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   
	I0818 19:57:57.746099   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0818 19:57:57.746109   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |     <dhcp>
	I0818 19:57:57.746131   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0818 19:57:57.746141   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |     </dhcp>
	I0818 19:57:57.746147   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   </ip>
	I0818 19:57:57.746155   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG |   
	I0818 19:57:57.746163   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | </network>
	I0818 19:57:57.746176   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | 
	I0818 19:57:57.751775   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | trying to create private KVM network mk-old-k8s-version-247539 192.168.50.0/24...
	I0818 19:57:57.864938   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting up store path in /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539 ...
	I0818 19:57:57.864962   67330 main.go:141] libmachine: (old-k8s-version-247539) Building disk image from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 19:57:57.864978   67330 main.go:141] libmachine: (old-k8s-version-247539) Downloading /home/jenkins/minikube-integration/19423-7747/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso...
	I0818 19:57:57.864992   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | private KVM network mk-old-k8s-version-247539 192.168.50.0/24 created
	I0818 19:57:57.865003   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:57.864237   67359 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:57:58.184522   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:58.181782   67359 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa...
	I0818 19:57:58.575954   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:58.575836   67359 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/old-k8s-version-247539.rawdisk...
	I0818 19:57:58.575997   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Writing magic tar header
	I0818 19:57:58.576014   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Writing SSH key tar header
	I0818 19:57:58.576027   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:57:58.575958   67359 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539 ...
	I0818 19:57:58.576043   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539
	I0818 19:57:58.576133   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539 (perms=drwx------)
	I0818 19:57:58.576162   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube/machines (perms=drwxr-xr-x)
	I0818 19:57:58.576175   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube/machines
	I0818 19:57:58.576193   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:57:58.576206   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19423-7747
	I0818 19:57:58.576222   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0818 19:57:58.576241   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home/jenkins
	I0818 19:57:58.576256   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747/.minikube (perms=drwxr-xr-x)
	I0818 19:57:58.576268   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Checking permissions on dir: /home
	I0818 19:57:58.576282   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration/19423-7747 (perms=drwxrwxr-x)
	I0818 19:57:58.576294   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0818 19:57:58.576307   67330 main.go:141] libmachine: (old-k8s-version-247539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0818 19:57:58.576319   67330 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 19:57:58.576333   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Skipping /home - not owner
	I0818 19:57:58.577273   67330 main.go:141] libmachine: (old-k8s-version-247539) define libvirt domain using xml: 
	I0818 19:57:58.577315   67330 main.go:141] libmachine: (old-k8s-version-247539) <domain type='kvm'>
	I0818 19:57:58.577326   67330 main.go:141] libmachine: (old-k8s-version-247539)   <name>old-k8s-version-247539</name>
	I0818 19:57:58.577337   67330 main.go:141] libmachine: (old-k8s-version-247539)   <memory unit='MiB'>2200</memory>
	I0818 19:57:58.577346   67330 main.go:141] libmachine: (old-k8s-version-247539)   <vcpu>2</vcpu>
	I0818 19:57:58.577356   67330 main.go:141] libmachine: (old-k8s-version-247539)   <features>
	I0818 19:57:58.577364   67330 main.go:141] libmachine: (old-k8s-version-247539)     <acpi/>
	I0818 19:57:58.577375   67330 main.go:141] libmachine: (old-k8s-version-247539)     <apic/>
	I0818 19:57:58.577392   67330 main.go:141] libmachine: (old-k8s-version-247539)     <pae/>
	I0818 19:57:58.577403   67330 main.go:141] libmachine: (old-k8s-version-247539)     
	I0818 19:57:58.577411   67330 main.go:141] libmachine: (old-k8s-version-247539)   </features>
	I0818 19:57:58.577422   67330 main.go:141] libmachine: (old-k8s-version-247539)   <cpu mode='host-passthrough'>
	I0818 19:57:58.577431   67330 main.go:141] libmachine: (old-k8s-version-247539)   
	I0818 19:57:58.577442   67330 main.go:141] libmachine: (old-k8s-version-247539)   </cpu>
	I0818 19:57:58.577450   67330 main.go:141] libmachine: (old-k8s-version-247539)   <os>
	I0818 19:57:58.577460   67330 main.go:141] libmachine: (old-k8s-version-247539)     <type>hvm</type>
	I0818 19:57:58.577472   67330 main.go:141] libmachine: (old-k8s-version-247539)     <boot dev='cdrom'/>
	I0818 19:57:58.577482   67330 main.go:141] libmachine: (old-k8s-version-247539)     <boot dev='hd'/>
	I0818 19:57:58.577494   67330 main.go:141] libmachine: (old-k8s-version-247539)     <bootmenu enable='no'/>
	I0818 19:57:58.577505   67330 main.go:141] libmachine: (old-k8s-version-247539)   </os>
	I0818 19:57:58.577515   67330 main.go:141] libmachine: (old-k8s-version-247539)   <devices>
	I0818 19:57:58.577524   67330 main.go:141] libmachine: (old-k8s-version-247539)     <disk type='file' device='cdrom'>
	I0818 19:57:58.577542   67330 main.go:141] libmachine: (old-k8s-version-247539)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/boot2docker.iso'/>
	I0818 19:57:58.577554   67330 main.go:141] libmachine: (old-k8s-version-247539)       <target dev='hdc' bus='scsi'/>
	I0818 19:57:58.577567   67330 main.go:141] libmachine: (old-k8s-version-247539)       <readonly/>
	I0818 19:57:58.577577   67330 main.go:141] libmachine: (old-k8s-version-247539)     </disk>
	I0818 19:57:58.577587   67330 main.go:141] libmachine: (old-k8s-version-247539)     <disk type='file' device='disk'>
	I0818 19:57:58.577600   67330 main.go:141] libmachine: (old-k8s-version-247539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0818 19:57:58.577623   67330 main.go:141] libmachine: (old-k8s-version-247539)       <source file='/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/old-k8s-version-247539.rawdisk'/>
	I0818 19:57:58.577637   67330 main.go:141] libmachine: (old-k8s-version-247539)       <target dev='hda' bus='virtio'/>
	I0818 19:57:58.577646   67330 main.go:141] libmachine: (old-k8s-version-247539)     </disk>
	I0818 19:57:58.577659   67330 main.go:141] libmachine: (old-k8s-version-247539)     <interface type='network'>
	I0818 19:57:58.577672   67330 main.go:141] libmachine: (old-k8s-version-247539)       <source network='mk-old-k8s-version-247539'/>
	I0818 19:57:58.577684   67330 main.go:141] libmachine: (old-k8s-version-247539)       <model type='virtio'/>
	I0818 19:57:58.577694   67330 main.go:141] libmachine: (old-k8s-version-247539)     </interface>
	I0818 19:57:58.577705   67330 main.go:141] libmachine: (old-k8s-version-247539)     <interface type='network'>
	I0818 19:57:58.577717   67330 main.go:141] libmachine: (old-k8s-version-247539)       <source network='default'/>
	I0818 19:57:58.577736   67330 main.go:141] libmachine: (old-k8s-version-247539)       <model type='virtio'/>
	I0818 19:57:58.577749   67330 main.go:141] libmachine: (old-k8s-version-247539)     </interface>
	I0818 19:57:58.577757   67330 main.go:141] libmachine: (old-k8s-version-247539)     <serial type='pty'>
	I0818 19:57:58.577768   67330 main.go:141] libmachine: (old-k8s-version-247539)       <target port='0'/>
	I0818 19:57:58.577780   67330 main.go:141] libmachine: (old-k8s-version-247539)     </serial>
	I0818 19:57:58.577792   67330 main.go:141] libmachine: (old-k8s-version-247539)     <console type='pty'>
	I0818 19:57:58.577802   67330 main.go:141] libmachine: (old-k8s-version-247539)       <target type='serial' port='0'/>
	I0818 19:57:58.577813   67330 main.go:141] libmachine: (old-k8s-version-247539)     </console>
	I0818 19:57:58.577824   67330 main.go:141] libmachine: (old-k8s-version-247539)     <rng model='virtio'>
	I0818 19:57:58.577836   67330 main.go:141] libmachine: (old-k8s-version-247539)       <backend model='random'>/dev/random</backend>
	I0818 19:57:58.577847   67330 main.go:141] libmachine: (old-k8s-version-247539)     </rng>
	I0818 19:57:58.577855   67330 main.go:141] libmachine: (old-k8s-version-247539)     
	I0818 19:57:58.577865   67330 main.go:141] libmachine: (old-k8s-version-247539)     
	I0818 19:57:58.577874   67330 main.go:141] libmachine: (old-k8s-version-247539)   </devices>
	I0818 19:57:58.577884   67330 main.go:141] libmachine: (old-k8s-version-247539) </domain>
	I0818 19:57:58.577894   67330 main.go:141] libmachine: (old-k8s-version-247539) 
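
The driver dumps the domain XML above, registers it with libvirt, and then boots it ("Creating domain..."). Below is a minimal sketch of that define-and-start flow done by shelling out to virsh; the XML path is a placeholder and this is not minikube's actual code path, which talks to libvirt directly.

// define_and_start.go - sketch: define a libvirt domain from XML and boot it.
// Assumes virsh is installed and qemu:///system is reachable; the XML path is a placeholder.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// "virsh define" registers the domain from XML like the dump logged above.
	if out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"define", "/tmp/old-k8s-version-247539.xml").CombinedOutput(); err != nil {
		log.Fatalf("define failed: %v\n%s", err, out)
	}
	// "virsh start" corresponds to the driver's "Creating domain..." step.
	if out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"start", "old-k8s-version-247539").CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
}
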
	I0818 19:57:58.583173   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:64:f0:ab in network default
	I0818 19:57:58.583914   67330 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 19:57:58.583937   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:57:58.584805   67330 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 19:57:58.585149   67330 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 19:57:58.585834   67330 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 19:57:58.586674   67330 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 19:58:00.450709   67330 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 19:58:00.451936   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:00.452392   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:00.452434   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:00.452377   67359 retry.go:31] will retry after 241.241265ms: waiting for machine to come up
	I0818 19:58:00.695927   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:00.696655   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:00.696676   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:00.696568   67359 retry.go:31] will retry after 375.625845ms: waiting for machine to come up
	I0818 19:58:01.074552   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:01.075182   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:01.075329   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:01.075291   67359 retry.go:31] will retry after 377.725453ms: waiting for machine to come up
	I0818 19:58:01.454965   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:01.455653   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:01.455677   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:01.455600   67359 retry.go:31] will retry after 490.039131ms: waiting for machine to come up
	I0818 19:58:01.946926   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:01.947504   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:01.947537   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:01.947457   67359 retry.go:31] will retry after 578.750617ms: waiting for machine to come up
	I0818 19:58:02.527972   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:02.528785   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:02.528807   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:02.528707   67359 retry.go:31] will retry after 627.941976ms: waiting for machine to come up
	I0818 19:58:03.158619   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:03.159084   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:03.159111   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:03.159041   67359 retry.go:31] will retry after 1.110108246s: waiting for machine to come up
	I0818 19:58:04.270727   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:04.271362   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:04.271411   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:04.271303   67359 retry.go:31] will retry after 998.273726ms: waiting for machine to come up
	I0818 19:58:05.270986   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:05.271554   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:05.271582   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:05.271519   67359 retry.go:31] will retry after 1.240428493s: waiting for machine to come up
	I0818 19:58:06.513857   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:06.514626   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:06.514653   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:06.514588   67359 retry.go:31] will retry after 2.299995679s: waiting for machine to come up
	I0818 19:58:08.816324   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:08.816956   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:08.816985   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:08.816896   67359 retry.go:31] will retry after 2.65092011s: waiting for machine to come up
	I0818 19:58:11.469337   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:11.469815   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:11.469847   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:11.469767   67359 retry.go:31] will retry after 2.606526377s: waiting for machine to come up
	I0818 19:58:14.078595   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:14.079224   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:14.079245   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:14.079160   67359 retry.go:31] will retry after 3.862765663s: waiting for machine to come up
	I0818 19:58:17.943070   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:17.943640   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 19:58:17.943666   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 19:58:17.943598   67359 retry.go:31] will retry after 5.486952781s: waiting for machine to come up
	I0818 19:58:23.432278   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:23.432756   67330 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 19:58:23.432777   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
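
The "Waiting to get IP" phase above is a polling loop: each pass looks for a DHCP lease matching the domain's MAC and, if none exists yet, sleeps for a growing interval (the retry.go lines). A minimal sketch of that wait-with-backoff pattern follows; the probe passed in main is a stand-in, whereas minikube's real probe inspects the libvirt network's leases.

// wait_for_ip.go - sketch of the "Waiting to get IP" loop: poll a probe until it yields an
// address, backing off between attempts roughly like the retry intervals logged above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(probe func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := probe(); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(wait)
		if wait < 5*time.Second { // grow the interval between attempts, with a cap
			wait += wait / 2
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Placeholder probe; the run above eventually resolved to 192.168.50.105.
	ip, err := waitForIP(func() (string, error) { return "192.168.50.105", nil }, time.Minute)
	fmt.Println(ip, err)
}
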
	I0818 19:58:23.432784   67330 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 19:58:23.433120   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539
	I0818 19:58:23.510525   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 19:58:23.510566   67330 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 19:58:23.510581   67330 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 19:58:23.513568   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:23.514021   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539
	I0818 19:58:23.514051   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find defined IP address of network mk-old-k8s-version-247539 interface with MAC address 52:54:00:5a:f6:41
	I0818 19:58:23.514200   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 19:58:23.514223   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 19:58:23.514249   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 19:58:23.514264   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 19:58:23.514294   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 19:58:23.517950   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: exit status 255: 
	I0818 19:58:23.517974   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0818 19:58:23.517984   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | command : exit 0
	I0818 19:58:23.517991   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | err     : exit status 255
	I0818 19:58:23.518002   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | output  : 
	I0818 19:58:26.519525   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 19:58:26.522568   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:26.522996   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:26.523036   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:26.523222   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 19:58:26.523248   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 19:58:26.523288   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 19:58:26.523301   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 19:58:26.523315   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 19:58:26.651465   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
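
WaitForSSH keeps running `exit 0` on the guest with host-key checking disabled until sshd answers: the first attempt above fails with exit status 255 (sshd not up yet), the retry succeeds. A minimal sketch of that reachability probe, assuming a system ssh client on PATH; the user, IP and key path are illustrative and taken from this run.

// ssh_probe.go - sketch of the WaitForSSH check: run `exit 0` on the guest until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshAlive(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil // a non-nil error (e.g. exit status 255) means sshd is not reachable yet
}

func main() {
	for !sshAlive("192.168.50.105", "/path/to/machines/old-k8s-version-247539/id_rsa") {
		time.Sleep(3 * time.Second) // the log also retries after roughly 3s
	}
	fmt.Println("SSH is available")
}
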
	I0818 19:58:26.651749   67330 main.go:141] libmachine: (old-k8s-version-247539) KVM machine creation complete!
	I0818 19:58:26.652060   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 19:58:26.652589   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:58:26.652794   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:58:26.652945   67330 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0818 19:58:26.652959   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 19:58:26.654251   67330 main.go:141] libmachine: Detecting operating system of created instance...
	I0818 19:58:26.654267   67330 main.go:141] libmachine: Waiting for SSH to be available...
	I0818 19:58:26.654274   67330 main.go:141] libmachine: Getting to WaitForSSH function...
	I0818 19:58:26.654283   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:26.656571   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:26.656974   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:26.657009   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:26.657091   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:26.657281   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:26.657448   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:26.657603   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:26.657763   67330 main.go:141] libmachine: Using SSH client type: native
	I0818 19:58:26.658024   67330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 19:58:26.658042   67330 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0818 19:58:26.771790   67330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:58:26.771814   67330 main.go:141] libmachine: Detecting the provisioner...
	I0818 19:58:26.771825   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:26.776184   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:26.776616   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:26.776648   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:26.776804   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:26.777010   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:26.777209   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:26.777370   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:26.777544   67330 main.go:141] libmachine: Using SSH client type: native
	I0818 19:58:26.777705   67330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 19:58:26.777715   67330 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0818 19:58:26.900605   67330 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0818 19:58:26.900706   67330 main.go:141] libmachine: found compatible host: buildroot
	I0818 19:58:26.900719   67330 main.go:141] libmachine: Provisioning with buildroot...
	I0818 19:58:26.900731   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 19:58:26.900985   67330 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 19:58:26.901007   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 19:58:26.901196   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:26.903745   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.096297   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:27.096328   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.096490   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:27.096749   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:27.096961   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:27.097125   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:27.097305   67330 main.go:141] libmachine: Using SSH client type: native
	I0818 19:58:27.097517   67330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 19:58:27.097536   67330 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 19:58:27.228583   67330 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 19:58:27.228613   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:27.445006   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.445432   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:27.445470   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.445760   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:27.445959   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:27.446151   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:27.446289   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:27.446497   67330 main.go:141] libmachine: Using SSH client type: native
	I0818 19:58:27.446699   67330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 19:58:27.446724   67330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:58:27.580472   67330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:58:27.580505   67330 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 19:58:27.580535   67330 buildroot.go:174] setting up certificates
	I0818 19:58:27.580549   67330 provision.go:84] configureAuth start
	I0818 19:58:27.580563   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 19:58:27.580846   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 19:58:27.584235   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.584614   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:27.584644   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.584813   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:27.587440   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.587902   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:27.587929   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.588104   67330 provision.go:143] copyHostCerts
	I0818 19:58:27.588149   67330 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 19:58:27.588163   67330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 19:58:27.588226   67330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 19:58:27.588349   67330 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 19:58:27.588359   67330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 19:58:27.588383   67330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 19:58:27.588449   67330 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 19:58:27.588456   67330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 19:58:27.588483   67330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 19:58:27.588541   67330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
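
configureAuth generates a server certificate whose SANs are the names and addresses listed above (127.0.0.1, 192.168.50.105, localhost, minikube, old-k8s-version-247539), signed by the profile's CA. Below is a self-contained sketch of issuing a certificate with those SANs using crypto/x509; for brevity it self-signs instead of signing with ca.pem/ca-key.pem, and the lifetime reuses the profile's CertExpiration value.

// server_cert.go - sketch: issue a TLS cert whose SANs match the ones logged above.
// Self-signed for brevity; minikube signs with the profile CA instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-247539"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-247539"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.105")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
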
	I0818 19:58:27.674599   67330 provision.go:177] copyRemoteCerts
	I0818 19:58:27.674645   67330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:58:27.674663   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:27.677681   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.678056   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:27.678087   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.678290   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:27.678518   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:27.678700   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:27.678834   67330 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 19:58:27.777194   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:58:27.809123   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 19:58:27.836736   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 19:58:27.862771   67330 provision.go:87] duration metric: took 282.208275ms to configureAuth
	I0818 19:58:27.862796   67330 buildroot.go:189] setting minikube options for container-runtime
	I0818 19:58:27.862927   67330 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 19:58:27.863006   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:27.866440   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.866863   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:27.866893   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:27.867047   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:27.867252   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:27.867475   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:27.867650   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:27.867835   67330 main.go:141] libmachine: Using SSH client type: native
	I0818 19:58:27.868009   67330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 19:58:27.868032   67330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 19:58:28.149928   67330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 19:58:28.149955   67330 main.go:141] libmachine: Checking connection to Docker...
	I0818 19:58:28.149965   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetURL
	I0818 19:58:28.151235   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using libvirt version 6000000
	I0818 19:58:28.153459   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.153788   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:28.153814   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.153962   67330 main.go:141] libmachine: Docker is up and running!
	I0818 19:58:28.153976   67330 main.go:141] libmachine: Reticulating splines...
	I0818 19:58:28.153984   67330 client.go:171] duration metric: took 30.414379591s to LocalClient.Create
	I0818 19:58:28.154008   67330 start.go:167] duration metric: took 30.414441056s to libmachine.API.Create "old-k8s-version-247539"
	I0818 19:58:28.154022   67330 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 19:58:28.154034   67330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:58:28.154049   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:58:28.154302   67330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:58:28.154325   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:28.156469   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.156825   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:28.156855   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.157028   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:28.157204   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:28.157360   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:28.157486   67330 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 19:58:28.245979   67330 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:58:28.250754   67330 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 19:58:28.250773   67330 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 19:58:28.250833   67330 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 19:58:28.250934   67330 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 19:58:28.251049   67330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:58:28.260674   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:58:28.284451   67330 start.go:296] duration metric: took 130.415166ms for postStartSetup
	I0818 19:58:28.284519   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 19:58:28.285111   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 19:58:28.288038   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.288488   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:28.288520   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.288793   67330 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 19:58:28.288957   67330 start.go:128] duration metric: took 30.572843413s to createHost
	I0818 19:58:28.288977   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:28.291482   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.291756   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:28.291781   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.291932   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:28.292098   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:28.292262   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:28.292411   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:28.292535   67330 main.go:141] libmachine: Using SSH client type: native
	I0818 19:58:28.292735   67330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 19:58:28.292747   67330 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 19:58:28.404275   67330 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011108.381683474
	
	I0818 19:58:28.404300   67330 fix.go:216] guest clock: 1724011108.381683474
	I0818 19:58:28.404310   67330 fix.go:229] Guest: 2024-08-18 19:58:28.381683474 +0000 UTC Remote: 2024-08-18 19:58:28.288967742 +0000 UTC m=+30.695821952 (delta=92.715732ms)
	I0818 19:58:28.404334   67330 fix.go:200] guest clock delta is within tolerance: 92.715732ms
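
After provisioning, the host compares its own clock against `date +%s.%N` run on the guest and only resyncs if the delta exceeds a tolerance; here the ~93ms delta is accepted. A small sketch of that check, assuming the guest timestamp has already been captured as the fractional-seconds string shown above; the tolerance value is an assumption, not minikube's exact threshold.

// clock_delta.go - sketch of the guest-clock tolerance check seen in the fix.go lines above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	const guestStamp = "1724011108.381683474" // output of `date +%s.%N` on the guest
	secs, err := strconv.ParseFloat(guestStamp, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	tolerance := 2 * time.Second // assumed threshold for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
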
	I0818 19:58:28.404339   67330 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 30.688319361s
	I0818 19:58:28.404362   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:58:28.404646   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 19:58:28.407775   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.408143   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:28.408170   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.408313   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:58:28.408843   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:58:28.409027   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 19:58:28.409131   67330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:58:28.409182   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:28.409264   67330 ssh_runner.go:195] Run: cat /version.json
	I0818 19:58:28.409295   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 19:58:28.411929   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.412196   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.412287   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:28.412324   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.412443   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:28.412531   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:28.412557   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:28.412619   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:28.412735   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 19:58:28.412801   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:28.412890   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 19:58:28.412960   67330 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 19:58:28.413014   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 19:58:28.413142   67330 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 19:58:28.497471   67330 ssh_runner.go:195] Run: systemctl --version
	I0818 19:58:28.523749   67330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 19:58:28.683621   67330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 19:58:28.690751   67330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 19:58:28.690831   67330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:58:28.708219   67330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 19:58:28.708247   67330 start.go:495] detecting cgroup driver to use...
	I0818 19:58:28.708327   67330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 19:58:28.728875   67330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 19:58:28.745627   67330 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:58:28.745689   67330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:58:28.762042   67330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:58:28.777741   67330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:58:28.903144   67330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:58:29.068947   67330 docker.go:233] disabling docker service ...
	I0818 19:58:29.069004   67330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:58:29.084501   67330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:58:29.099099   67330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 19:58:29.232540   67330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 19:58:29.356028   67330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:58:29.372379   67330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:58:29.391249   67330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 19:58:29.391311   67330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:58:29.402152   67330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 19:58:29.402206   67330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:58:29.412194   67330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:58:29.427182   67330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 19:58:29.441255   67330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:58:29.451525   67330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:58:29.460654   67330 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 19:58:29.460714   67330 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 19:58:29.474283   67330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:58:29.483671   67330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:58:29.610463   67330 ssh_runner.go:195] Run: sudo systemctl restart crio
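
The crio.go steps above rewrite the 02-crio.conf drop-in in place (pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod"), remove /etc/cni/net.mk, make sure br_netfilter and IPv4 forwarding are enabled, and then restart CRI-O. Below is a compact sketch of the same edit-and-restart flow done locally with regexps instead of sed; the file path and values come from the log, the rest is illustrative and needs root.

// crio_config.go - sketch: set pause_image and cgroup_manager in CRI-O's drop-in and restart it.
// Mirrors the sed edits and `systemctl restart crio` from the log above; run as root.
package main

import (
	"log"
	"os"
	"os/exec"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
	if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
		log.Fatal(err)
	}
}
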
	I0818 19:58:29.760753   67330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 19:58:29.760822   67330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 19:58:29.766300   67330 start.go:563] Will wait 60s for crictl version
	I0818 19:58:29.766345   67330 ssh_runner.go:195] Run: which crictl
	I0818 19:58:29.770276   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:58:29.811408   67330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 19:58:29.811522   67330 ssh_runner.go:195] Run: crio --version
	I0818 19:58:29.840556   67330 ssh_runner.go:195] Run: crio --version
	I0818 19:58:29.875421   67330 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 19:58:29.876810   67330 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 19:58:29.880826   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:29.881628   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 20:58:15 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 19:58:29.881668   67330 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 19:58:29.882008   67330 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 19:58:29.886676   67330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 19:58:29.900439   67330 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:58:29.900539   67330 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 19:58:29.900586   67330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:58:29.936690   67330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 19:58:29.936765   67330 ssh_runner.go:195] Run: which lz4
	I0818 19:58:29.940979   67330 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 19:58:29.945294   67330 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 19:58:29.945339   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 19:58:31.683109   67330 crio.go:462] duration metric: took 1.742162722s to copy over tarball
	I0818 19:58:31.683231   67330 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 19:58:34.481757   67330 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.798495772s)
	I0818 19:58:34.481789   67330 crio.go:469] duration metric: took 2.798602184s to extract the tarball
	I0818 19:58:34.481800   67330 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 19:58:34.525798   67330 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:58:34.577951   67330 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
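Even after the preload tarball has been extracted, the check above still reports the v1.20.0 control-plane images as missing, so minikube falls back to loading individual cached images next. To see what actually landed in the cri-o image store, the same crictl call can be inspected by hand (a minimal sketch assuming jq is installed on the node; the test parses the JSON internally):

	$ sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# the check above looks for registry.k8s.io/kube-apiserver:v1.20.0 in this list
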
	I0818 19:58:34.577974   67330 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 19:58:34.578041   67330 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:58:34.578063   67330 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 19:58:34.578087   67330 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:58:34.578086   67330 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 19:58:34.578041   67330 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 19:58:34.578163   67330 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:58:34.578175   67330 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 19:58:34.578308   67330 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:58:34.579901   67330 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:58:34.579948   67330 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 19:58:34.579955   67330 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 19:58:34.579956   67330 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 19:58:34.579979   67330 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 19:58:34.580025   67330 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:58:34.579906   67330 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:58:34.580253   67330 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:58:34.726710   67330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:58:34.738435   67330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 19:58:34.777740   67330 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 19:58:34.777791   67330 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:58:34.777843   67330 ssh_runner.go:195] Run: which crictl
	I0818 19:58:34.797712   67330 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 19:58:34.797754   67330 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 19:58:34.797790   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:58:34.797794   67330 ssh_runner.go:195] Run: which crictl
	I0818 19:58:34.802844   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 19:58:34.827567   67330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:58:34.854340   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:58:34.865963   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 19:58:34.917797   67330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 19:58:34.926774   67330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:58:34.929098   67330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:58:34.933396   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 19:58:34.933393   67330 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 19:58:34.933487   67330 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:58:34.933530   67330 ssh_runner.go:195] Run: which crictl
	I0818 19:58:34.946522   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 19:58:34.967234   67330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 19:58:35.083134   67330 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 19:58:35.083229   67330 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:58:35.083244   67330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 19:58:35.083249   67330 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 19:58:35.083288   67330 ssh_runner.go:195] Run: which crictl
	I0818 19:58:35.083294   67330 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 19:58:35.083307   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:58:35.083331   67330 ssh_runner.go:195] Run: which crictl
	I0818 19:58:35.083185   67330 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 19:58:35.083363   67330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 19:58:35.083367   67330 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:58:35.083408   67330 ssh_runner.go:195] Run: which crictl
	I0818 19:58:35.086725   67330 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 19:58:35.086763   67330 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 19:58:35.086801   67330 ssh_runner.go:195] Run: which crictl
	I0818 19:58:35.091958   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:58:35.139743   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 19:58:35.139784   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:58:35.139837   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:58:35.139855   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 19:58:35.153495   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:58:35.250381   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 19:58:35.250436   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 19:58:35.253218   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:58:35.253262   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 19:58:35.268027   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 19:58:35.341262   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 19:58:35.341315   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 19:58:35.367187   67330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 19:58:35.375453   67330 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 19:58:35.386153   67330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 19:58:35.433799   67330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 19:58:35.433856   67330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 19:58:35.449822   67330 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 19:58:35.516645   67330 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 19:58:35.664736   67330 cache_images.go:92] duration metric: took 1.08674885s to LoadCachedImages
	W0818 19:58:35.664826   67330 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory

	I0818 19:58:35.664840   67330 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 19:58:35.664936   67330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
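The [Unit]/[Service] snippet above is evidently the kubelet drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 430-byte scp). Once it is in place, the effective unit and flags can be confirmed on the node with standard systemd tooling (a sketch, not part of the test):

	$ systemctl cat kubelet            # kubelet.service plus the 10-kubeadm.conf drop-in
	$ systemctl show kubelet --property=ExecStart --no-pager
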
	I0818 19:58:35.665001   67330 ssh_runner.go:195] Run: crio config
	I0818 19:58:35.727710   67330 cni.go:84] Creating CNI manager for ""
	I0818 19:58:35.727732   67330 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 19:58:35.727744   67330 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:58:35.727769   67330 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 19:58:35.727904   67330 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
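The block above is the kubeadm config that the test writes to /var/tmp/minikube/kubeadm.yaml.new (and later copies into place) before running kubeadm init. It can be exercised without mutating the node by using kubeadm's dry-run mode (a minimal sketch, using the same binary path as the test):

	$ sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
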
	
	I0818 19:58:35.727964   67330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 19:58:35.739040   67330 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:58:35.739107   67330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 19:58:35.754102   67330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 19:58:35.776619   67330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:58:35.798837   67330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 19:58:35.820673   67330 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 19:58:35.824706   67330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 19:58:35.838562   67330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:58:35.972713   67330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:58:35.993624   67330 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 19:58:35.993646   67330 certs.go:194] generating shared ca certs ...
	I0818 19:58:35.993681   67330 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:58:35.993822   67330 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 19:58:35.993875   67330 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 19:58:35.993888   67330 certs.go:256] generating profile certs ...
	I0818 19:58:35.993952   67330 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 19:58:35.993976   67330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.crt with IP's: []
	I0818 19:58:36.145863   67330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.crt ...
	I0818 19:58:36.145898   67330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.crt: {Name:mk6d59a5ff7041e4bc80fa239aa2afc0424b99bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:58:36.146085   67330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key ...
	I0818 19:58:36.146109   67330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key: {Name:mk16be4c70d0b1657a0e6183ed79fe3dc82e0869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:58:36.146237   67330 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 19:58:36.146262   67330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt.3812b43e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.105]
	I0818 19:58:36.558825   67330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt.3812b43e ...
	I0818 19:58:36.558853   67330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt.3812b43e: {Name:mkd6ab8d33ca241cc9980a55cc8cf274230fc9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:58:36.616604   67330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e ...
	I0818 19:58:36.616650   67330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e: {Name:mk110d186a321fbeef0e24a13d32396f8833ad45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:58:36.616805   67330 certs.go:381] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt.3812b43e -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt
	I0818 19:58:36.616908   67330 certs.go:385] copying /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e -> /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key
	I0818 19:58:36.616989   67330 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 19:58:36.617007   67330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt with IP's: []
	I0818 19:58:36.889103   67330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt ...
	I0818 19:58:36.889133   67330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt: {Name:mk6baeaa7934b12b08ad5a893ee97a2c7aa88306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:58:36.918227   67330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key ...
	I0818 19:58:36.918261   67330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key: {Name:mk538f933cd5c64c7b904132dbeddeb60517a2f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:58:36.918520   67330 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 19:58:36.918578   67330 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 19:58:36.918604   67330 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:58:36.918639   67330 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:58:36.918678   67330 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:58:36.918717   67330 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 19:58:36.918849   67330 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 19:58:36.919796   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:58:36.957524   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:58:36.993791   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:58:37.032163   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 19:58:37.073628   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 19:58:37.100720   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 19:58:37.132032   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:58:37.159014   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 19:58:37.192650   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 19:58:37.223734   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 19:58:37.260148   67330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
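The apiserver certificate generated earlier is signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.105 (see the crypto.go line above). Once it has been copied to /var/lib/minikube/certs, the SANs can be double-checked with openssl (a minimal sketch, not part of the test):

	$ sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
	# expect the four IP SANs listed above
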
	I0818 19:58:37.287346   67330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:58:37.307361   67330 ssh_runner.go:195] Run: openssl version
	I0818 19:58:37.313275   67330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 19:58:37.325426   67330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 19:58:37.330715   67330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 19:58:37.330768   67330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 19:58:37.336861   67330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 19:58:37.347737   67330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 19:58:37.361279   67330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 19:58:37.365888   67330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 19:58:37.365946   67330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 19:58:37.371651   67330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 19:58:37.382008   67330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:58:37.392652   67330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:58:37.397105   67330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:58:37.397162   67330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:58:37.402686   67330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
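The three ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: the hash printed by the openssl x509 -hash call becomes the symlink name so that CA lookups in /etc/ssl/certs resolve, the same scheme c_rehash uses. On the node this can be reproduced directly (a minimal sketch):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ ls -l /etc/ssl/certs/b5213941.0
	# ... b5213941.0 -> /etc/ssl/certs/minikubeCA.pem
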
	I0818 19:58:37.413360   67330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:58:37.417556   67330 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 19:58:37.417610   67330 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:58:37.417703   67330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 19:58:37.417765   67330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:58:37.460817   67330 cri.go:89] found id: ""
	I0818 19:58:37.460885   67330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 19:58:37.472242   67330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 19:58:37.481981   67330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 19:58:37.491661   67330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 19:58:37.491681   67330 kubeadm.go:157] found existing configuration files:
	
	I0818 19:58:37.491723   67330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 19:58:37.500732   67330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 19:58:37.500785   67330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 19:58:37.510832   67330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 19:58:37.523234   67330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 19:58:37.523291   67330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 19:58:37.532835   67330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 19:58:37.542350   67330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 19:58:37.542405   67330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 19:58:37.554775   67330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 19:58:37.566470   67330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 19:58:37.566531   67330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 19:58:37.579326   67330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 19:58:37.717211   67330 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 19:58:37.717293   67330 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 19:58:37.913289   67330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 19:58:37.913470   67330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 19:58:37.913624   67330 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 19:58:38.118976   67330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 19:58:38.120533   67330 out.go:235]   - Generating certificates and keys ...
	I0818 19:58:38.120632   67330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 19:58:38.120716   67330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 19:58:38.217949   67330 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0818 19:58:38.480674   67330 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0818 19:58:38.698258   67330 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0818 19:58:38.891526   67330 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0818 19:58:39.122021   67330 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0818 19:58:39.122343   67330 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-247539] and IPs [192.168.50.105 127.0.0.1 ::1]
	I0818 19:58:39.361399   67330 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0818 19:58:39.361796   67330 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-247539] and IPs [192.168.50.105 127.0.0.1 ::1]
	I0818 19:58:39.514998   67330 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0818 19:58:40.173577   67330 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0818 19:58:40.219360   67330 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0818 19:58:40.219689   67330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 19:58:40.269420   67330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 19:58:40.789377   67330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 19:58:41.236515   67330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 19:58:41.517116   67330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 19:58:41.547586   67330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 19:58:41.550758   67330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 19:58:41.550833   67330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 19:58:41.714129   67330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 19:58:41.716713   67330 out.go:235]   - Booting up control plane ...
	I0818 19:58:41.716851   67330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 19:58:41.723766   67330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 19:58:41.725262   67330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 19:58:41.726774   67330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 19:58:41.732085   67330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 19:59:21.727561   67330 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 19:59:21.728164   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:59:21.728439   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:59:26.727945   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:59:26.728254   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:59:36.728265   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:59:36.728572   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 19:59:56.728222   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 19:59:56.728521   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:00:36.729736   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:00:36.730009   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:00:36.730028   67330 kubeadm.go:310] 
	I0818 20:00:36.730079   67330 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:00:36.730164   67330 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:00:36.730190   67330 kubeadm.go:310] 
	I0818 20:00:36.730245   67330 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:00:36.730309   67330 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:00:36.730465   67330 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:00:36.730474   67330 kubeadm.go:310] 
	I0818 20:00:36.730572   67330 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:00:36.730617   67330 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:00:36.730646   67330 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:00:36.730653   67330 kubeadm.go:310] 
	I0818 20:00:36.730790   67330 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:00:36.730904   67330 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:00:36.730919   67330 kubeadm.go:310] 
	I0818 20:00:36.731072   67330 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:00:36.731205   67330 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:00:36.731305   67330 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:00:36.731399   67330 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:00:36.731418   67330 kubeadm.go:310] 
	I0818 20:00:36.732553   67330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:00:36.732653   67330 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:00:36.732721   67330 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0818 20:00:36.732859   67330 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-247539] and IPs [192.168.50.105 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-247539] and IPs [192.168.50.105 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
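
kubeadm itself spells out the triage path above; before minikube resets and retries below, the manual checks amount to confirming whether the kubelet ever became healthy and whether any control-plane container was created under cri-o (a sketch using only the commands the error message names, with non-interactive flags added):

	$ sudo systemctl status kubelet --no-pager
	$ sudo journalctl -xeu kubelet --no-pager | tail -n 50
	$ sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	$ sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>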
	
	I0818 20:00:36.732899   67330 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:00:37.224328   67330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:00:37.238466   67330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:00:37.248271   67330 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:00:37.248291   67330 kubeadm.go:157] found existing configuration files:
	
	I0818 20:00:37.248334   67330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:00:37.257513   67330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:00:37.257576   67330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:00:37.267081   67330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:00:37.276095   67330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:00:37.276157   67330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:00:37.285237   67330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:00:37.293838   67330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:00:37.293889   67330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:00:37.303426   67330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:00:37.312542   67330 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:00:37.312604   67330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:00:37.323453   67330 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:00:37.531159   67330 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:02:33.720024   67330 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:02:33.720108   67330 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:02:33.721898   67330 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:02:33.721982   67330 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:02:33.722082   67330 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:02:33.722166   67330 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:02:33.722263   67330 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:02:33.722314   67330 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:02:33.724278   67330 out.go:235]   - Generating certificates and keys ...
	I0818 20:02:33.724340   67330 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:02:33.724402   67330 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:02:33.724466   67330 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:02:33.724513   67330 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:02:33.724567   67330 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:02:33.724608   67330 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:02:33.724679   67330 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:02:33.724732   67330 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:02:33.724788   67330 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:02:33.724852   67330 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:02:33.724885   67330 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:02:33.724928   67330 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:02:33.724974   67330 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:02:33.725018   67330 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:02:33.725083   67330 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:02:33.725143   67330 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:02:33.725237   67330 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:02:33.725345   67330 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:02:33.725406   67330 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:02:33.725489   67330 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:02:33.727068   67330 out.go:235]   - Booting up control plane ...
	I0818 20:02:33.727159   67330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:02:33.727223   67330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:02:33.727285   67330 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:02:33.727362   67330 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:02:33.727516   67330 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:02:33.727563   67330 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:02:33.727622   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:02:33.727785   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:02:33.727877   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:02:33.728053   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:02:33.728141   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:02:33.728322   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:02:33.728415   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:02:33.728649   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:02:33.728750   67330 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:02:33.728917   67330 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:02:33.728925   67330 kubeadm.go:310] 
	I0818 20:02:33.728958   67330 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:02:33.728992   67330 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:02:33.728999   67330 kubeadm.go:310] 
	I0818 20:02:33.729038   67330 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:02:33.729085   67330 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:02:33.729201   67330 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:02:33.729209   67330 kubeadm.go:310] 
	I0818 20:02:33.729322   67330 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:02:33.729376   67330 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:02:33.729425   67330 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:02:33.729434   67330 kubeadm.go:310] 
	I0818 20:02:33.729550   67330 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:02:33.729650   67330 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:02:33.729662   67330 kubeadm.go:310] 
	I0818 20:02:33.729781   67330 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:02:33.729890   67330 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:02:33.730004   67330 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:02:33.730086   67330 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:02:33.730111   67330 kubeadm.go:310] 
	I0818 20:02:33.730154   67330 kubeadm.go:394] duration metric: took 3m56.312548713s to StartCluster
	I0818 20:02:33.730186   67330 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:02:33.730233   67330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:02:33.786325   67330 cri.go:89] found id: ""
	I0818 20:02:33.786363   67330 logs.go:276] 0 containers: []
	W0818 20:02:33.786372   67330 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:02:33.786377   67330 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:02:33.786434   67330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:02:33.826899   67330 cri.go:89] found id: ""
	I0818 20:02:33.826923   67330 logs.go:276] 0 containers: []
	W0818 20:02:33.826930   67330 logs.go:278] No container was found matching "etcd"
	I0818 20:02:33.826936   67330 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:02:33.826982   67330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:02:33.860140   67330 cri.go:89] found id: ""
	I0818 20:02:33.860164   67330 logs.go:276] 0 containers: []
	W0818 20:02:33.860175   67330 logs.go:278] No container was found matching "coredns"
	I0818 20:02:33.860195   67330 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:02:33.860257   67330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:02:33.899777   67330 cri.go:89] found id: ""
	I0818 20:02:33.899802   67330 logs.go:276] 0 containers: []
	W0818 20:02:33.899810   67330 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:02:33.899816   67330 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:02:33.899877   67330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:02:33.934477   67330 cri.go:89] found id: ""
	I0818 20:02:33.934505   67330 logs.go:276] 0 containers: []
	W0818 20:02:33.934516   67330 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:02:33.934523   67330 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:02:33.934583   67330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:02:33.973173   67330 cri.go:89] found id: ""
	I0818 20:02:33.973207   67330 logs.go:276] 0 containers: []
	W0818 20:02:33.973217   67330 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:02:33.973225   67330 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:02:33.973283   67330 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:02:34.007941   67330 cri.go:89] found id: ""
	I0818 20:02:34.007973   67330 logs.go:276] 0 containers: []
	W0818 20:02:34.007983   67330 logs.go:278] No container was found matching "kindnet"
	I0818 20:02:34.007994   67330 logs.go:123] Gathering logs for kubelet ...
	I0818 20:02:34.008005   67330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:02:34.059983   67330 logs.go:123] Gathering logs for dmesg ...
	I0818 20:02:34.060020   67330 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:02:34.074180   67330 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:02:34.074209   67330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:02:34.190709   67330 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:02:34.190731   67330 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:02:34.190743   67330 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:02:34.299742   67330 logs.go:123] Gathering logs for container status ...
	I0818 20:02:34.299781   67330 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:02:34.337617   67330 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:02:34.337685   67330 out.go:270] * 
	* 
	W0818 20:02:34.337738   67330 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:02:34.337755   67330 out.go:270] * 
	* 
	W0818 20:02:34.338537   67330 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:02:34.341919   67330 out.go:201] 
	W0818 20:02:34.343196   67330 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:02:34.343278   67330 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:02:34.343305   67330 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:02:34.344889   67330 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-247539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 6 (222.756276ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:02:34.615457   73363 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-247539" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (277.04s)
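Editor's note: the failure above is minikube exiting with K8S_KUBELET_NOT_RUNNING after kubeadm's kubelet health check on localhost:10248 never succeeded. As a minimal follow-up sketch (assembled only from commands that kubeadm and minikube themselves print in the log above, with the profile name and flags taken from the failing invocation; the --extra-config retry is minikube's own suggestion, not a verified fix):

	# inspect kubelet and CRI-O state inside the old-k8s-version-247539 VM
	out/minikube-linux-amd64 -p old-k8s-version-247539 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-247539 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-247539 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# retry the start with the cgroup driver named in minikube's suggestion above
	out/minikube-linux-amd64 start -p old-k8s-version-247539 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd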

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-944426 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-944426 --alsologtostderr -v=3: exit status 82 (2m0.772967092s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-944426"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 20:00:23.650114   71952 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:00:23.650358   71952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:00:23.650367   71952 out.go:358] Setting ErrFile to fd 2...
	I0818 20:00:23.650372   71952 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:00:23.650536   71952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:00:23.650809   71952 out.go:352] Setting JSON to false
	I0818 20:00:23.650902   71952 mustload.go:65] Loading cluster: no-preload-944426
	I0818 20:00:23.651351   71952 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:00:23.651465   71952 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:00:23.651696   71952 mustload.go:65] Loading cluster: no-preload-944426
	I0818 20:00:23.651855   71952 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:00:23.651893   71952 stop.go:39] StopHost: no-preload-944426
	I0818 20:00:23.652429   71952 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:00:23.652480   71952 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:00:23.667007   71952 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42829
	I0818 20:00:23.667488   71952 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:00:23.668019   71952 main.go:141] libmachine: Using API Version  1
	I0818 20:00:23.668042   71952 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:00:23.668384   71952 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:00:23.670912   71952 out.go:177] * Stopping node "no-preload-944426"  ...
	I0818 20:00:23.672253   71952 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 20:00:23.672286   71952 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:00:23.672502   71952 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 20:00:23.672523   71952 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:00:23.675836   71952 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:00:23.676230   71952 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 20:58:44 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:00:23.676269   71952 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:00:23.676438   71952 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:00:23.676614   71952 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:00:23.676779   71952 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:00:23.676939   71952 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:00:23.773302   71952 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0818 20:00:23.840272   71952 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0818 20:00:23.920311   71952 main.go:141] libmachine: Stopping "no-preload-944426"...
	I0818 20:00:23.920350   71952 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:00:23.922341   71952 main.go:141] libmachine: (no-preload-944426) Calling .Stop
	I0818 20:00:23.925955   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 0/120
	I0818 20:00:24.927665   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 1/120
	I0818 20:00:25.929087   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 2/120
	I0818 20:00:26.930508   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 3/120
	I0818 20:00:27.932019   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 4/120
	I0818 20:00:28.933574   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 5/120
	I0818 20:00:29.935551   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 6/120
	I0818 20:00:30.937270   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 7/120
	I0818 20:00:31.938907   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 8/120
	I0818 20:00:32.940678   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 9/120
	I0818 20:00:33.943121   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 10/120
	I0818 20:00:34.944593   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 11/120
	I0818 20:00:35.946098   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 12/120
	I0818 20:00:36.947532   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 13/120
	I0818 20:00:37.949053   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 14/120
	I0818 20:00:38.951028   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 15/120
	I0818 20:00:39.952666   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 16/120
	I0818 20:00:40.954128   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 17/120
	I0818 20:00:41.955538   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 18/120
	I0818 20:00:42.957765   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 19/120
	I0818 20:00:43.959937   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 20/120
	I0818 20:00:44.962227   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 21/120
	I0818 20:00:45.963419   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 22/120
	I0818 20:00:46.964839   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 23/120
	I0818 20:00:47.966995   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 24/120
	I0818 20:00:48.968898   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 25/120
	I0818 20:00:49.970463   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 26/120
	I0818 20:00:50.972042   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 27/120
	I0818 20:00:51.973489   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 28/120
	I0818 20:00:52.974942   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 29/120
	I0818 20:00:53.977016   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 30/120
	I0818 20:00:54.978492   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 31/120
	I0818 20:00:55.979756   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 32/120
	I0818 20:00:56.982260   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 33/120
	I0818 20:00:57.983872   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 34/120
	I0818 20:00:58.985942   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 35/120
	I0818 20:00:59.987489   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 36/120
	I0818 20:01:00.989218   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 37/120
	I0818 20:01:01.991010   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 38/120
	I0818 20:01:02.992835   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 39/120
	I0818 20:01:03.994777   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 40/120
	I0818 20:01:04.997088   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 41/120
	I0818 20:01:05.998630   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 42/120
	I0818 20:01:07.247592   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 43/120
	I0818 20:01:08.249670   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 44/120
	I0818 20:01:09.251892   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 45/120
	I0818 20:01:10.253847   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 46/120
	I0818 20:01:11.255490   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 47/120
	I0818 20:01:12.256986   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 48/120
	I0818 20:01:13.258377   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 49/120
	I0818 20:01:14.259823   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 50/120
	I0818 20:01:15.261443   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 51/120
	I0818 20:01:16.262929   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 52/120
	I0818 20:01:17.264663   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 53/120
	I0818 20:01:18.266082   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 54/120
	I0818 20:01:19.268396   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 55/120
	I0818 20:01:20.269698   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 56/120
	I0818 20:01:21.271064   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 57/120
	I0818 20:01:22.272397   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 58/120
	I0818 20:01:23.273930   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 59/120
	I0818 20:01:24.275975   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 60/120
	I0818 20:01:25.277843   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 61/120
	I0818 20:01:26.279562   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 62/120
	I0818 20:01:27.280981   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 63/120
	I0818 20:01:28.282361   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 64/120
	I0818 20:01:29.284408   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 65/120
	I0818 20:01:30.285890   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 66/120
	I0818 20:01:31.287170   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 67/120
	I0818 20:01:32.288342   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 68/120
	I0818 20:01:33.290031   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 69/120
	I0818 20:01:34.292358   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 70/120
	I0818 20:01:35.294057   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 71/120
	I0818 20:01:36.295807   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 72/120
	I0818 20:01:37.297273   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 73/120
	I0818 20:01:38.298596   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 74/120
	I0818 20:01:39.300456   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 75/120
	I0818 20:01:40.301860   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 76/120
	I0818 20:01:41.303035   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 77/120
	I0818 20:01:42.304871   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 78/120
	I0818 20:01:43.306210   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 79/120
	I0818 20:01:44.308463   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 80/120
	I0818 20:01:45.309767   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 81/120
	I0818 20:01:46.311150   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 82/120
	I0818 20:01:47.312623   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 83/120
	I0818 20:01:48.314085   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 84/120
	I0818 20:01:49.316017   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 85/120
	I0818 20:01:50.317544   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 86/120
	I0818 20:01:51.319078   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 87/120
	I0818 20:01:52.320649   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 88/120
	I0818 20:01:53.322410   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 89/120
	I0818 20:01:54.324961   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 90/120
	I0818 20:01:55.326373   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 91/120
	I0818 20:01:56.327704   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 92/120
	I0818 20:01:57.328959   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 93/120
	I0818 20:01:58.330120   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 94/120
	I0818 20:01:59.331949   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 95/120
	I0818 20:02:00.334078   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 96/120
	I0818 20:02:01.335788   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 97/120
	I0818 20:02:02.338125   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 98/120
	I0818 20:02:03.340176   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 99/120
	I0818 20:02:04.342595   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 100/120
	I0818 20:02:05.344039   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 101/120
	I0818 20:02:06.345370   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 102/120
	I0818 20:02:07.346933   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 103/120
	I0818 20:02:08.348355   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 104/120
	I0818 20:02:09.350430   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 105/120
	I0818 20:02:10.352228   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 106/120
	I0818 20:02:11.353833   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 107/120
	I0818 20:02:12.355142   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 108/120
	I0818 20:02:13.356560   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 109/120
	I0818 20:02:14.359041   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 110/120
	I0818 20:02:15.360456   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 111/120
	I0818 20:02:16.361875   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 112/120
	I0818 20:02:17.363423   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 113/120
	I0818 20:02:18.364826   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 114/120
	I0818 20:02:19.367084   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 115/120
	I0818 20:02:20.368508   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 116/120
	I0818 20:02:21.370074   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 117/120
	I0818 20:02:22.371462   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 118/120
	I0818 20:02:23.373005   71952 main.go:141] libmachine: (no-preload-944426) Waiting for machine to stop 119/120
	I0818 20:02:24.373723   71952 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0818 20:02:24.373793   71952 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0818 20:02:24.375778   71952 out.go:201] 
	W0818 20:02:24.377202   71952 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0818 20:02:24.377217   71952 out.go:270] * 
	* 
	W0818 20:02:24.379698   71952 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:02:24.381276   71952 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-944426 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426
E0818 20:02:27.262570   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:29.643476   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:29.649842   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:29.661341   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:29.682681   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:29.724140   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:29.805608   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:29.967164   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:30.288866   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:30.930410   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426: exit status 3 (18.46071859s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:02:42.843707   73269 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host
	E0818 20:02:42.843735   73269 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-944426" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.23s)
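The failure above is a timeout rather than a crash: the driver polls the VM once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120"), then gives up while the VM still reports "Running", and the command exits with status 82 (GUEST_STOP_TIMEOUT). A minimal Go sketch of that polling pattern follows; the helper names (vmState, waitForStop) are hypothetical stand-ins inferred from the log above, not minikube's real API.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// vmState stands in for the libvirt/driver state query; in the failing run it
// kept returning "Running" for the full two minutes.
func vmState() string { return "Running" }

// waitForStop mirrors the loop visible in the log: poll once per second,
// up to `attempts` times, and fail if the machine never leaves "Running".
func waitForStop(name string, attempts int) error {
	for i := 0; i < attempts; i++ {
		if vmState() != "Running" {
			return nil
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop("no-preload-944426", 120); err != nil {
		fmt.Fprintf(os.Stderr, "X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: %v\n", err)
		os.Exit(82) // exit status reported by the failing test above
	}
}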

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-291295 --alsologtostderr -v=3
E0818 20:00:34.266125   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:48.743463   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:53.949924   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:53.956314   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:53.967715   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:53.989324   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:54.030968   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:54.112421   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:54.274397   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:54.596613   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:54.748158   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:55.238031   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:56.520416   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:59.082261   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-291295 --alsologtostderr -v=3: exit status 82 (2m0.507825007s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-291295"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 20:00:31.201922   72083 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:00:31.202222   72083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:00:31.202233   72083 out.go:358] Setting ErrFile to fd 2...
	I0818 20:00:31.202240   72083 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:00:31.202553   72083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:00:31.202881   72083 out.go:352] Setting JSON to false
	I0818 20:00:31.202982   72083 mustload.go:65] Loading cluster: embed-certs-291295
	I0818 20:00:31.203487   72083 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:00:31.203605   72083 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:00:31.203828   72083 mustload.go:65] Loading cluster: embed-certs-291295
	I0818 20:00:31.203997   72083 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:00:31.204029   72083 stop.go:39] StopHost: embed-certs-291295
	I0818 20:00:31.204617   72083 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:00:31.204674   72083 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:00:31.220012   72083 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0818 20:00:31.220540   72083 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:00:31.221237   72083 main.go:141] libmachine: Using API Version  1
	I0818 20:00:31.221267   72083 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:00:31.221770   72083 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:00:31.223969   72083 out.go:177] * Stopping node "embed-certs-291295"  ...
	I0818 20:00:31.225317   72083 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 20:00:31.225349   72083 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:00:31.225614   72083 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 20:00:31.225642   72083 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:00:31.229024   72083 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:00:31.229517   72083 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 20:59:12 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:00:31.229550   72083 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:00:31.229742   72083 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:00:31.229919   72083 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:00:31.230094   72083 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:00:31.230236   72083 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:00:31.327517   72083 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0818 20:00:31.394224   72083 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0818 20:00:31.449206   72083 main.go:141] libmachine: Stopping "embed-certs-291295"...
	I0818 20:00:31.449259   72083 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:00:31.450819   72083 main.go:141] libmachine: (embed-certs-291295) Calling .Stop
	I0818 20:00:31.454382   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 0/120
	I0818 20:00:32.456210   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 1/120
	I0818 20:00:33.457703   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 2/120
	I0818 20:00:34.459495   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 3/120
	I0818 20:00:35.461127   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 4/120
	I0818 20:00:36.463433   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 5/120
	I0818 20:00:37.464844   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 6/120
	I0818 20:00:38.466224   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 7/120
	I0818 20:00:39.467469   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 8/120
	I0818 20:00:40.468926   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 9/120
	I0818 20:00:41.470392   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 10/120
	I0818 20:00:42.471784   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 11/120
	I0818 20:00:43.473839   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 12/120
	I0818 20:00:44.475436   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 13/120
	I0818 20:00:45.476768   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 14/120
	I0818 20:00:46.478301   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 15/120
	I0818 20:00:47.479864   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 16/120
	I0818 20:00:48.481817   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 17/120
	I0818 20:00:49.483084   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 18/120
	I0818 20:00:50.484449   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 19/120
	I0818 20:00:51.486513   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 20/120
	I0818 20:00:52.488094   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 21/120
	I0818 20:00:53.489722   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 22/120
	I0818 20:00:54.491253   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 23/120
	I0818 20:00:55.493705   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 24/120
	I0818 20:00:56.495979   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 25/120
	I0818 20:00:57.497901   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 26/120
	I0818 20:00:58.499532   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 27/120
	I0818 20:00:59.501968   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 28/120
	I0818 20:01:00.503510   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 29/120
	I0818 20:01:01.505333   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 30/120
	I0818 20:01:02.506961   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 31/120
	I0818 20:01:03.508920   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 32/120
	I0818 20:01:04.510230   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 33/120
	I0818 20:01:05.511754   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 34/120
	I0818 20:01:06.513730   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 35/120
	I0818 20:01:07.515120   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 36/120
	I0818 20:01:08.516673   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 37/120
	I0818 20:01:09.518296   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 38/120
	I0818 20:01:10.519877   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 39/120
	I0818 20:01:11.521936   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 40/120
	I0818 20:01:12.524223   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 41/120
	I0818 20:01:13.525889   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 42/120
	I0818 20:01:14.527141   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 43/120
	I0818 20:01:15.528620   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 44/120
	I0818 20:01:16.530544   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 45/120
	I0818 20:01:17.532076   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 46/120
	I0818 20:01:18.533480   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 47/120
	I0818 20:01:19.535114   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 48/120
	I0818 20:01:20.537264   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 49/120
	I0818 20:01:21.539294   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 50/120
	I0818 20:01:22.540549   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 51/120
	I0818 20:01:23.541753   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 52/120
	I0818 20:01:24.543398   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 53/120
	I0818 20:01:25.544763   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 54/120
	I0818 20:01:26.546865   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 55/120
	I0818 20:01:27.548354   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 56/120
	I0818 20:01:28.549880   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 57/120
	I0818 20:01:29.551217   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 58/120
	I0818 20:01:30.552492   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 59/120
	I0818 20:01:31.554631   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 60/120
	I0818 20:01:32.555862   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 61/120
	I0818 20:01:33.557514   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 62/120
	I0818 20:01:34.559397   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 63/120
	I0818 20:01:35.560886   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 64/120
	I0818 20:01:36.562870   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 65/120
	I0818 20:01:37.564369   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 66/120
	I0818 20:01:38.565955   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 67/120
	I0818 20:01:39.567289   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 68/120
	I0818 20:01:40.568587   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 69/120
	I0818 20:01:41.570859   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 70/120
	I0818 20:01:42.572382   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 71/120
	I0818 20:01:43.573797   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 72/120
	I0818 20:01:44.575299   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 73/120
	I0818 20:01:45.576895   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 74/120
	I0818 20:01:46.578927   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 75/120
	I0818 20:01:47.580230   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 76/120
	I0818 20:01:48.581595   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 77/120
	I0818 20:01:49.583066   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 78/120
	I0818 20:01:50.584681   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 79/120
	I0818 20:01:51.586673   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 80/120
	I0818 20:01:52.588092   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 81/120
	I0818 20:01:53.589843   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 82/120
	I0818 20:01:54.591198   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 83/120
	I0818 20:01:55.592433   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 84/120
	I0818 20:01:56.594146   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 85/120
	I0818 20:01:57.595603   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 86/120
	I0818 20:01:58.597858   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 87/120
	I0818 20:01:59.599273   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 88/120
	I0818 20:02:00.600728   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 89/120
	I0818 20:02:01.603172   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 90/120
	I0818 20:02:02.605628   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 91/120
	I0818 20:02:03.606939   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 92/120
	I0818 20:02:04.608397   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 93/120
	I0818 20:02:05.609766   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 94/120
	I0818 20:02:06.611924   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 95/120
	I0818 20:02:07.613720   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 96/120
	I0818 20:02:08.615111   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 97/120
	I0818 20:02:09.616406   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 98/120
	I0818 20:02:10.617609   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 99/120
	I0818 20:02:11.619829   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 100/120
	I0818 20:02:12.621229   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 101/120
	I0818 20:02:13.622721   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 102/120
	I0818 20:02:14.624215   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 103/120
	I0818 20:02:15.625649   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 104/120
	I0818 20:02:16.627813   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 105/120
	I0818 20:02:17.629265   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 106/120
	I0818 20:02:18.630657   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 107/120
	I0818 20:02:19.632066   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 108/120
	I0818 20:02:20.633438   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 109/120
	I0818 20:02:21.635784   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 110/120
	I0818 20:02:22.637545   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 111/120
	I0818 20:02:23.638910   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 112/120
	I0818 20:02:24.640345   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 113/120
	I0818 20:02:25.641764   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 114/120
	I0818 20:02:26.643798   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 115/120
	I0818 20:02:27.645094   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 116/120
	I0818 20:02:28.646318   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 117/120
	I0818 20:02:29.647617   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 118/120
	I0818 20:02:30.649296   72083 main.go:141] libmachine: (embed-certs-291295) Waiting for machine to stop 119/120
	I0818 20:02:31.650438   72083 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0818 20:02:31.650500   72083 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0818 20:02:31.652315   72083 out.go:201] 
	W0818 20:02:31.653587   72083 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0818 20:02:31.653615   72083 out.go:270] * 
	* 
	W0818 20:02:31.656188   72083 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:02:31.657486   72083 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-291295 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295
E0818 20:02:32.212594   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295: exit status 3 (18.608186065s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:02:50.267651   73315 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0818 20:02:50.267670   73315 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-291295" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-852598 --alsologtostderr -v=3
E0818 20:02:15.889260   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-852598 --alsologtostderr -v=3: exit status 82 (2m0.461610155s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-852598"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 20:02:11.405269   73202 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:02:11.405375   73202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:02:11.405384   73202 out.go:358] Setting ErrFile to fd 2...
	I0818 20:02:11.405388   73202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:02:11.405564   73202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:02:11.405764   73202 out.go:352] Setting JSON to false
	I0818 20:02:11.405839   73202 mustload.go:65] Loading cluster: default-k8s-diff-port-852598
	I0818 20:02:11.406902   73202 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:02:11.407044   73202 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:02:11.407541   73202 mustload.go:65] Loading cluster: default-k8s-diff-port-852598
	I0818 20:02:11.407733   73202 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:02:11.407781   73202 stop.go:39] StopHost: default-k8s-diff-port-852598
	I0818 20:02:11.408199   73202 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:02:11.408251   73202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:02:11.423692   73202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43941
	I0818 20:02:11.424122   73202 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:02:11.424696   73202 main.go:141] libmachine: Using API Version  1
	I0818 20:02:11.424722   73202 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:02:11.425055   73202 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:02:11.427452   73202 out.go:177] * Stopping node "default-k8s-diff-port-852598"  ...
	I0818 20:02:11.429138   73202 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0818 20:02:11.429164   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:02:11.429429   73202 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0818 20:02:11.429464   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:02:11.432268   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:02:11.432784   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:01:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:02:11.432811   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:02:11.432954   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:02:11.433145   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:02:11.433308   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:02:11.433435   73202 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:02:11.523716   73202 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0818 20:02:11.581506   73202 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0818 20:02:11.623276   73202 main.go:141] libmachine: Stopping "default-k8s-diff-port-852598"...
	I0818 20:02:11.623313   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:02:11.624679   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Stop
	I0818 20:02:11.627848   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 0/120
	I0818 20:02:12.629775   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 1/120
	I0818 20:02:13.630830   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 2/120
	I0818 20:02:14.632073   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 3/120
	I0818 20:02:15.633746   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 4/120
	I0818 20:02:16.635782   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 5/120
	I0818 20:02:17.637859   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 6/120
	I0818 20:02:18.639102   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 7/120
	I0818 20:02:19.640173   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 8/120
	I0818 20:02:20.641740   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 9/120
	I0818 20:02:21.643822   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 10/120
	I0818 20:02:22.645757   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 11/120
	I0818 20:02:23.646974   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 12/120
	I0818 20:02:24.648052   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 13/120
	I0818 20:02:25.649729   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 14/120
	I0818 20:02:26.651677   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 15/120
	I0818 20:02:27.652777   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 16/120
	I0818 20:02:28.654012   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 17/120
	I0818 20:02:29.655111   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 18/120
	I0818 20:02:30.656899   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 19/120
	I0818 20:02:31.659101   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 20/120
	I0818 20:02:32.660351   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 21/120
	I0818 20:02:33.661977   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 22/120
	I0818 20:02:34.663212   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 23/120
	I0818 20:02:35.664442   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 24/120
	I0818 20:02:36.666432   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 25/120
	I0818 20:02:37.668088   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 26/120
	I0818 20:02:38.669496   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 27/120
	I0818 20:02:39.671103   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 28/120
	I0818 20:02:40.672499   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 29/120
	I0818 20:02:41.674983   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 30/120
	I0818 20:02:42.676457   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 31/120
	I0818 20:02:43.677886   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 32/120
	I0818 20:02:44.679771   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 33/120
	I0818 20:02:45.681465   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 34/120
	I0818 20:02:46.683700   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 35/120
	I0818 20:02:47.685168   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 36/120
	I0818 20:02:48.686787   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 37/120
	I0818 20:02:49.688024   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 38/120
	I0818 20:02:50.689309   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 39/120
	I0818 20:02:51.691519   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 40/120
	I0818 20:02:52.693034   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 41/120
	I0818 20:02:53.694365   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 42/120
	I0818 20:02:54.695664   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 43/120
	I0818 20:02:55.697102   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 44/120
	I0818 20:02:56.698998   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 45/120
	I0818 20:02:57.700497   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 46/120
	I0818 20:02:58.701961   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 47/120
	I0818 20:02:59.703081   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 48/120
	I0818 20:03:00.704534   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 49/120
	I0818 20:03:01.706875   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 50/120
	I0818 20:03:02.708238   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 51/120
	I0818 20:03:03.709665   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 52/120
	I0818 20:03:04.711040   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 53/120
	I0818 20:03:05.712528   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 54/120
	I0818 20:03:06.714717   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 55/120
	I0818 20:03:07.716124   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 56/120
	I0818 20:03:08.717537   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 57/120
	I0818 20:03:09.718807   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 58/120
	I0818 20:03:10.720284   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 59/120
	I0818 20:03:11.721531   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 60/120
	I0818 20:03:12.722793   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 61/120
	I0818 20:03:13.724467   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 62/120
	I0818 20:03:14.725805   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 63/120
	I0818 20:03:15.727223   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 64/120
	I0818 20:03:16.729291   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 65/120
	I0818 20:03:17.730584   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 66/120
	I0818 20:03:18.732129   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 67/120
	I0818 20:03:19.733756   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 68/120
	I0818 20:03:20.735195   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 69/120
	I0818 20:03:21.736515   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 70/120
	I0818 20:03:22.738073   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 71/120
	I0818 20:03:23.739716   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 72/120
	I0818 20:03:24.742001   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 73/120
	I0818 20:03:25.743950   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 74/120
	I0818 20:03:26.746060   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 75/120
	I0818 20:03:27.747467   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 76/120
	I0818 20:03:28.748973   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 77/120
	I0818 20:03:29.750435   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 78/120
	I0818 20:03:30.751834   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 79/120
	I0818 20:03:31.754072   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 80/120
	I0818 20:03:32.755327   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 81/120
	I0818 20:03:33.757034   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 82/120
	I0818 20:03:34.758639   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 83/120
	I0818 20:03:35.760148   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 84/120
	I0818 20:03:36.762119   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 85/120
	I0818 20:03:37.763504   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 86/120
	I0818 20:03:38.764836   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 87/120
	I0818 20:03:39.766331   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 88/120
	I0818 20:03:40.767648   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 89/120
	I0818 20:03:41.769894   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 90/120
	I0818 20:03:42.771814   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 91/120
	I0818 20:03:43.773759   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 92/120
	I0818 20:03:44.775130   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 93/120
	I0818 20:03:45.776641   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 94/120
	I0818 20:03:46.778556   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 95/120
	I0818 20:03:47.780074   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 96/120
	I0818 20:03:48.781414   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 97/120
	I0818 20:03:49.782667   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 98/120
	I0818 20:03:50.784065   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 99/120
	I0818 20:03:51.786612   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 100/120
	I0818 20:03:52.787843   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 101/120
	I0818 20:03:53.789103   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 102/120
	I0818 20:03:54.790498   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 103/120
	I0818 20:03:55.792046   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 104/120
	I0818 20:03:56.794047   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 105/120
	I0818 20:03:57.795366   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 106/120
	I0818 20:03:58.796770   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 107/120
	I0818 20:03:59.798350   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 108/120
	I0818 20:04:00.799780   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 109/120
	I0818 20:04:01.802004   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 110/120
	I0818 20:04:02.803514   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 111/120
	I0818 20:04:03.804771   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 112/120
	I0818 20:04:04.806215   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 113/120
	I0818 20:04:05.807570   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 114/120
	I0818 20:04:06.809590   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 115/120
	I0818 20:04:07.811045   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 116/120
	I0818 20:04:08.812418   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 117/120
	I0818 20:04:09.813791   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 118/120
	I0818 20:04:10.815157   73202 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for machine to stop 119/120
	I0818 20:04:11.816383   73202 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0818 20:04:11.816455   73202 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0818 20:04:11.818700   73202 out.go:201] 
	W0818 20:04:11.820418   73202 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0818 20:04:11.820436   73202 out.go:270] * 
	* 
	W0818 20:04:11.823129   73202 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:04:11.824683   73202 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-852598 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
E0818 20:04:13.639161   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:04:26.646882   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:04:30.145922   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598: exit status 3 (18.536579859s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:04:30.363687   74104 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
	E0818 20:04:30.363711   74104 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-852598" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-247539 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-247539 create -f testdata/busybox.yaml: exit status 1 (42.762952ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-247539" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-247539 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539
E0818 20:02:34.774940   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 6 (220.46268ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:02:34.880252   73404 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-247539" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 6 (216.62178ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:02:35.096735   73434 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-247539" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-247539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0818 20:02:39.896626   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-247539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m56.046417575s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-247539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-247539 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-247539 describe deploy/metrics-server -n kube-system: exit status 1 (43.475016ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-247539" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-247539 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 6 (223.954711ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:04:31.410046   74239 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-247539" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426: exit status 3 (3.16813585s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:02:46.011698   73546 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host
	E0818 20:02:46.011718   73546 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-944426 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0818 20:02:50.138776   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-944426 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152916911s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-944426 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426
E0818 20:02:52.346956   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:52.988506   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426: exit status 3 (3.062376363s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:02:55.227790   73664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host
	E0818 20:02:55.227809   73664 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.228:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-944426" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295
E0818 20:02:51.626641   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:51.701883   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:51.708288   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:51.719634   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:51.740983   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:51.782379   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:51.863833   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:52.025361   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295: exit status 3 (3.1677903s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:02:53.435672   73618 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0818 20:02:53.435693   73618 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-291295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0818 20:02:54.270658   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-291295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152001616s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-291295 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295
E0818 20:03:01.954404   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295: exit status 3 (3.063971936s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:03:02.651850   73768 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0818 20:03:02.651873   73768 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-291295" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598: exit status 3 (3.167849465s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:04:33.531735   74199 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
	E0818 20:04:33.531755   74199 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-852598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-852598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152292814s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-852598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598: exit status 3 (3.063384464s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 20:04:42.747690   74439 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host
	E0818 20:04:42.747707   74439 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.111:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-852598" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (705.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-247539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0818 20:04:38.150067   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-247539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m42.198708371s)

                                                
                                                
-- stdout --
	* [old-k8s-version-247539] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-247539" primary control-plane node in "old-k8s-version-247539" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 20:04:35.931651   74389 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:35.931765   74389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:35.931775   74389 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:35.931782   74389 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:35.931954   74389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:35.932489   74389 out.go:352] Setting JSON to false
	I0818 20:04:35.933387   74389 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6420,"bootTime":1724005056,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:35.933443   74389 start.go:139] virtualization: kvm guest
	I0818 20:04:35.936190   74389 out.go:177] * [old-k8s-version-247539] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:35.937556   74389 notify.go:220] Checking for updates...
	I0818 20:04:35.937583   74389 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:35.938962   74389 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:35.940356   74389 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:35.941655   74389 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:35.943055   74389 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:35.944339   74389 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:35.945859   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:04:35.946265   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:35.946332   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:35.960999   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0818 20:04:35.961434   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:35.961925   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:04:35.961945   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:35.962244   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:35.962391   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:04:35.964049   74389 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0818 20:04:35.965243   74389 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:35.965542   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:35.965577   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:35.979946   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I0818 20:04:35.980302   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:35.980737   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:04:35.980752   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:35.981032   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:35.981223   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:04:36.014970   74389 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:36.016044   74389 start.go:297] selected driver: kvm2
	I0818 20:04:36.016055   74389 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:36.016160   74389 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:36.016806   74389 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:36.016869   74389 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:36.031109   74389 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:36.031479   74389 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:36.031550   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:04:36.031563   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:36.031599   74389 start.go:340] cluster config:
	{Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:36.031691   74389 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:36.033349   74389 out.go:177] * Starting "old-k8s-version-247539" primary control-plane node in "old-k8s-version-247539" cluster
	I0818 20:04:36.034654   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:04:36.034719   74389 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:36.034728   74389 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:36.034811   74389 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:36.034824   74389 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0818 20:04:36.034909   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:04:36.035084   74389 start.go:360] acquireMachinesLock for old-k8s-version-247539: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
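The shell snippet the provisioner just ran is what keeps the 127.0.1.1 alias in sync with the machine name. If you wanted to confirm it by hand over SSH, a minimal check would look roughly like this (hypothetical session; the expected output is inferred from the sed/echo logic above, not captured from this run):

	$ grep 'old-k8s-version-247539' /etc/hosts
	127.0.1.1 old-k8s-version-247539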
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
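The server certificate generated at provision.go:117 carries the SANs listed there (127.0.0.1, 192.168.50.105, localhost, minikube, old-k8s-version-247539) and is copied to /etc/docker/server.pem above. A way to confirm what landed on the guest, not part of the logged run, is a plain openssl dump:

	$ sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'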
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
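The find/mv pass above renames any bridge or podman CNI config out of the way; per the preceding line only 87-podman-bridge.conflist was hit. Listing the directory afterwards (expected output inferred from that log line, not captured) should therefore show the renamed file:

	$ ls /etc/cni/net.d | grep mk_disabled
	87-podman-bridge.conflist.mk_disabled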
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
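Taken together, the sed edits above pin the pause image and switch CRI-O to the cgroupfs driver with conmon running in the pod cgroup. A quick post-restart check of the drop-in (a sketch reconstructed from those commands; the line order in the real file may differ) would be:

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.2"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"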
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
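	(Editor's note: the block above is one iteration of the driver's readiness probe, and it repeats every few seconds until the restart deadline: pgrep for a kube-apiserver process, ask CRI-O for each control-plane container by name, then gather kubelet, dmesg, CRI-O and container-status logs; every "describe nodes" attempt fails because nothing is serving on localhost:8443. To repeat the same probe by hand over minikube ssh, something like the sketch below works; the profile name is a placeholder, not the one used by this run.)

	  PROFILE=old-k8s-version-000000      # placeholder; substitute the profile from the test run
	  minikube -p "$PROFILE" ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="$name"
	  done
	  minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
	  minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400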
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
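	(Editor's note: roughly four minutes of that probe loop without an apiserver is the cutoff: the driver declares the restart failed ("took 4m3.535048026s to restartPrimaryControlPlane"), wipes the old control plane with kubeadm reset, and will re-run kubeadm init from the generated /var/tmp/minikube/kubeadm.yaml. Written out as a shell sketch, the fallback looks like the following; the timeout value and the reset command are taken from the log, but the loop itself is illustrative, not minikube's actual Go implementation.)

	  deadline=$((SECONDS + 240))          # ~4 minutes, matching the restart budget in the log
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    (( SECONDS >= deadline )) && break
	    sleep 3
	  done
	  if ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    # no apiserver came back: reset and reinitialise, as the log shows
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	  fi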
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	* 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	* 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-247539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
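A possible follow-up for this failure, assembled only from the commands and the cgroup-driver suggestion already printed in the log above (same profile name and flags as the failed invocation); this is an editor's sketch for reproducing the diagnosis, not part of the recorded test run:

	# Inspect the kubelet on the node, as the kubeadm output recommends
	out/minikube-linux-amd64 -p old-k8s-version-247539 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-247539 ssh "sudo journalctl -xeu kubelet"
	# List control-plane containers via crictl (cri-o socket, per the kubeadm hint)
	out/minikube-linux-amd64 -p old-k8s-version-247539 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override from the Suggestion line
	out/minikube-linux-amd64 start -p old-k8s-version-247539 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# If it still fails, collect full logs for a GitHub issue
	out/minikube-linux-amd64 -p old-k8s-version-247539 logs --file=logs.txt
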
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 2 (226.909542ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-247539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-247539 logs -n 25: (1.567588966s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-944426             | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-868662                  | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
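The hostname step above reduces to two idempotent shell operations run over SSH; a condensed sketch of what the log shows, using the profile name from this run:

    NAME=embed-certs-291295
    # set the transient and the persistent hostname
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # make sure the new name resolves locally via 127.0.1.1
    if ! grep -q "\s$NAME\$" /etc/hosts; then
      if grep -q '^127.0.1.1\s' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi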
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
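Taken together, the sed edits above leave the CRI-O drop-in roughly in the following state. This is a reconstruction from the commands in the log, with section headers assumed from CRI-O's stock layout, not a dump of the actual file:

    # /etc/crio/crio.conf.d/02-crio.conf (sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    # /etc/crictl.yaml, so crictl talks to the CRI-O socket
    runtime-endpoint: unix:///var/run/crio/crio.sock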
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
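For context on the preload handling above: /preloaded.tar.lz4 is essentially an lz4-compressed snapshot of the container image store (and related state) for this Kubernetes version and runtime, shipped so the guest does not have to pull images. A minimal sketch of the unpack-and-verify sequence the log performs:

    # unpack the preloaded image store into /var, then confirm CRI-O sees the images
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json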
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
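Each of the openssl calls above uses -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); presumably a failing check is what would prompt the certificate to be refreshed before the cluster starts. For example:

    # succeeds only while the cert remains valid for at least another 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "cert ok" || echo "cert expires within 24h"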
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
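The lines above are minikube's stale-config cleanup on a cluster restart: each of the four kubeconfig-style files under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the endpoint is missing (in this run the files do not exist at all), and the freshly rendered kubeadm.yaml is then copied into place. A minimal shell sketch of that loop, using the endpoint and file names from the log:

    # Sketch only: condenses the grep-then-remove sequence from the log above.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # drop any config that no longer points at the expected control-plane endpoint
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
    # install the freshly generated kubeadm config
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml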
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
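restartPrimaryControlPlane has just re-run the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against that kubeadm.yaml and now polls for a kube-apiserver process about twice a second, which is what the long run of pgrep lines in this log is. A hedged shell equivalent of the same sequence:

    # Sketch, using the kubeadm binary minikube stages under /var/lib/minikube/binaries
    KPATH="/var/lib/minikube/binaries/v1.20.0:$PATH"
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$KPATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
    # wait for the apiserver process, mirroring the repeated pgrep polling in the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done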
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
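configureAuth above refreshes the docker-machine style TLS material: the host CA, client cert and key are re-copied under .minikube, a server certificate is generated with SANs for 127.0.0.1, the VM IP, the profile name, localhost and minikube, and ca.pem/server.pem/server-key.pem are pushed to /etc/docker on the guest. minikube does this in Go; an openssl analogue (illustrative only, not the actual implementation) would look roughly like:

    # Illustrative only: approximates provision.go's cert generation with openssl.
    # SANs taken from the "generating server cert" log line above.
    SAN="IP:127.0.0.1,IP:192.168.72.111,DNS:default-k8s-diff-port-852598,DNS:localhost,DNS:minikube"
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.default-k8s-diff-port-852598"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 1095 -out server.pem -extfile <(printf 'subjectAltName=%s' "$SAN")
    # the resulting ca.pem, server.pem and server-key.pem are then scp'd to /etc/docker on the guest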
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
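The block above is the container-runtime preparation: crictl is pointed at the cri-o socket, /etc/crio/crio.conf.d/02-crio.conf is rewritten to use the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, unprivileged low ports are allowed for pods, bridge netfilter and IP forwarding are enabled, and cri-o is restarted. Condensed into a single sketch:

    # Condensed sketch of the cri-o reconfiguration performed above
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    # load the bridge netfilter module, enable forwarding, then restart the runtime
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio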
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
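Meanwhile the kvm2 driver is restarting the stopped no-preload-944426 VM: it makes sure the default and mk-no-preload-944426 libvirt networks are active, starts the domain from its XML, and then polls for a DHCP lease on the domain's MAC with increasing backoff, which is the run of "waiting for machine to come up" retries above. A rough manual equivalent with virsh (the exact commands are an assumption; the driver talks to libvirt directly):

    # Rough virsh equivalent of the kvm2 restart above (assumed commands, not from the log)
    virsh net-start default 2>/dev/null || true
    virsh net-start mk-no-preload-944426 2>/dev/null || true
    virsh start no-preload-944426
    # poll DHCP leases until the domain's MAC (52:54:00:51:87:4a) gets an address
    until virsh net-dhcp-leases mk-no-preload-944426 | grep -q '52:54:00:51:87:4a'; do
        sleep 2
    done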
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
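The steps above copy each CA certificate into /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted CAs by hash. A rough sketch of the same flow in Go, shelling out to `openssl x509 -hash -noout` the way the log does; the paths are placeholders and the helper name is made up for the example:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under its OpenSSL subject hash,
// mirroring the "openssl x509 -hash -noout" + "ln -fs <hash>.0" steps above.
// Writing into /etc/ssl/certs normally requires root.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link before creating the new one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder path; the real run uses the certs under /usr/share/ca-certificates.
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```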
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
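Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. An equivalent check written with Go's crypto/x509, as a sketch; the certificate path in main is illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log checks the certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```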
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
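The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above poll roughly every 500ms until an apiserver process whose command line mentions "minikube" exists. A minimal local sketch of that wait loop; the 30s cap and the helper name are assumptions for the example (the real check runs over SSH inside the VM):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
// full command line matches the pattern shows up, or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// With -f the pattern is matched against the full command line; -x requires
		// a whole-line match and -n picks the newest matching process.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}
```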
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
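	The repeated 500s above are a wait loop: the apiserver's /healthz endpoint is polled until its post-start hooks settle and it returns 200. A minimal sketch of that polling pattern (not minikube's actual api_server.go; the address is taken from the log and TLS verification is skipped because the apiserver cert is self-signed in this setup):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// While the apiserver is still coming up it answers 500 with a body listing
// the failing post-start hooks, as in the log output above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			// Self-signed apiserver cert in this sketch, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("timed out waiting for %s to return 200", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.111:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```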
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
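	The pgrep lines above are another wait loop: the same command is retried about every half second until a kube-apiserver process appears. A rough local sketch of that retry loop (the real runs go through ssh_runner on the guest; the pattern string is copied from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess retries `pgrep -xnf pattern` until it exits 0 (a matching
// process exists) or the timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep found a match
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```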
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
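	The pod_ready.go entries above keep re-checking each control-plane pod and skip it while the hosting node itself is still NotReady. A simplified sketch of checking a pod's Ready condition by shelling out to kubectl (the real code uses client-go; the profile, namespace, and pod names below are only illustrative, taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition reports "True".
func podReady(kubeContext, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 240; i++ {
		ok, err := podReady("default-k8s-diff-port-852598", "kube-system",
			"etcd-default-k8s-diff-port-852598")
		if err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```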
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
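	The guest-clock lines above compare the VM's `date +%s.%N` output against the host clock and proceed only if the skew is within a tolerance. A small sketch of that comparison, using the sample value from the log and a made-up 2s tolerance (not minikube's actual threshold):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724011731.395088538") // value from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	tolerance := 2 * time.Second // illustrative tolerance only
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```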
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
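	The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) before crio is restarted. A local sketch of the same style of line-level rewrite for the first two settings, assuming the config path shown in the log (the real edits run as sed over SSH):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf forces a specific pause image and cgroup manager in a
// cri-o drop-in config, mirroring the sed commands in the log.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
		fmt.Println(err)
	}
}
```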
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
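	The pod_ready lines above (and throughout this log) poll each kube-system pod until its Ready condition reports True, giving up after the per-pod timeout (4m0s here). A minimal sketch of that kind of readiness wait with client-go follows; the helper name, polling interval, and kubeconfig path are illustrative assumptions, not minikube's actual pod_ready implementation.

    // readiness_sketch.go - illustrative only; not minikube's pod_ready helper.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls until the pod's Ready condition is True or the timeout expires.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // assumed polling interval
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Pod name taken from the log above; it is specific to this test run.
        fmt.Println(waitForPodReady(context.Background(), cs, "kube-system",
            "metrics-server-6867b74b74-brqj6", 4*time.Minute))
    }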
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
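	The cache_images flow that finishes here repeats the same pattern for every image: ask the runtime whether the image already exists at the expected ID (podman image inspect), remove any stale tag (crictl rmi), copy the cached tarball under /var/lib/minikube/images if it is not already there, then podman load it. A rough sketch of that check-then-load step is below; it runs the commands locally for simplicity and assumes the {{.Id}} format matches, whereas minikube drives the same commands over SSH (ssh_runner).

    // loadcached_sketch.go - a minimal sketch of the check-then-load flow above;
    // not minikube's cache_images code.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ensureImage loads a cached image tarball via podman unless the runtime
    // already reports the expected image ID (the ID format is an assumption here).
    func ensureImage(ref, wantID, tarball string) error {
        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        if err == nil && strings.TrimSpace(string(out)) == wantID {
            return nil // already present, nothing to transfer or load
        }
        // Remove any stale tag so the freshly loaded image wins.
        _ = exec.Command("sudo", "crictl", "rmi", ref).Run()
        // Load the image from the transferred tarball.
        if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
            return fmt.Errorf("podman load %s: %w", tarball, err)
        }
        return nil
    }

    func main() {
        // Reference, hash and tarball path taken from the log above.
        err := ensureImage("registry.k8s.io/kube-proxy:v1.31.0",
            "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
            "/var/lib/minikube/images/kube-proxy_v1.31.0")
        fmt.Println(err)
    }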
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
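	Each openssl x509 ... -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would mark the cert as expiring and trigger regeneration. The same check can be expressed directly, for example (a sketch, not minikube's code):

    // certcheck_sketch.go - equivalent of `openssl x509 -noout -in <crt> -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in the PEM file is still
    // valid for at least the given duration.
    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        // Path taken from the log above; 24h matches -checkend 86400.
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }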
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
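The gather cycle that ends above repeats throughout this section: with the apiserver on localhost:8443 refusing connections, "kubectl describe nodes" fails, the per-component "crictl ps" probes return no container IDs, and the tooling falls back to collecting kubelet, dmesg, CRI-O and container-status output before retrying. A minimal, hypothetical Go sketch of that per-component probe is below; it assumes sudo and crictl are on the node's PATH and is not minikube's actual implementation.

    // Hypothetical sketch, not minikube's code: probe each expected control-plane
    // component the way the log lines above do ("sudo crictl ps -a --quiet --name=...")
    // and report the ones with no matching container.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil || strings.TrimSpace(string(out)) == "" {
                // Mirrors the "No container was found matching ..." warnings above.
                fmt.Printf("no container found matching %q\n", name)
            }
        }
    }

An empty ID list for kube-apiserver is consistent with the connection-refused errors that every kubectl call in this log reports.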
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
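The interleaved pod_ready.go lines above come from three other test processes (PIDs 73815, 74485 and 73711), each polling its own cluster for a metrics-server pod whose Ready condition is still False. A minimal, hypothetical equivalent of that check via a kubectl shell-out is sketched below; the context name is a placeholder and the k8s-app=metrics-server label selector is an assumption, neither taken from this log.

    // Hypothetical readiness probe (placeholder context name, assumed label selector):
    // print the Ready condition that the pod_ready.go polling above is waiting on.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "example-cluster", "-n", "kube-system",
            "get", "pods", "-l", "k8s-app=metrics-server",
            "-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Println("Ready condition(s):", strings.TrimSpace(string(out))) // "False" matches the log above
    }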
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
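	[editor's note] The repeated `kubectl get sa default` runs above (roughly every 500ms) are a readiness poll: after creating the minikube-rbac clusterrolebinding, startup retries until the default service account is visible. A minimal sketch of that retry pattern, assuming a plain os/exec loop rather than minikube's actual helper (function name and parameters are illustrative):

	    package example

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForDefaultSA polls `sudo kubectl get sa default` until it succeeds
	    // or the timeout expires, mirroring the ~500ms spacing seen in the log.
	    func waitForDefaultSA(kubectlPath, kubeconfig string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command("sudo", kubectlPath, "get", "sa", "default", "--kubeconfig="+kubeconfig)
	            if err := cmd.Run(); err == nil {
	                return nil // the default service account exists; RBAC is usable
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("default service account not ready after %s", timeout)
	    }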
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
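	[editor's note] The sshutil lines above open SSH sessions to the node (192.168.39.125:22, user docker, the machine's id_rsa key) so the addon manifests can be copied and applied. A minimal golang.org/x/crypto/ssh sketch of such a client, under the assumption of key-based auth only (host-key verification is skipped here purely to keep the example short):

	    package example

	    import (
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    // newSSHClient dials host:22 as user, authenticating with the given
	    // private key file, roughly what the sshutil lines above set up.
	    func newSSHClient(host, user, keyPath string) (*ssh.Client, error) {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return nil, err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return nil, err
	        }
	        cfg := &ssh.ClientConfig{
	            User:            user,
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify known hosts in real code
	        }
	        return ssh.Dial("tcp", host+":22", cfg)
	    }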
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
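	[editor's note] The addon enable step above amounts to copying each manifest to /etc/kubernetes/addons/ on the node and applying them in a single kubectl call via sudo with the node-local kubeconfig. A hedged sketch of that apply step (paths and the sudo/KUBECONFIG wrapper are taken from the log lines; the helper itself is illustrative, not minikube's code):

	    package example

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // applyAddons applies already-copied addon manifests in one kubectl call,
	    // the way the storage-provisioner and metrics-server applies above do.
	    func applyAddons(kubectlPath string, manifests []string) error {
	        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectlPath, "apply"}
	        for _, m := range manifests {
	            args = append(args, "-f", m)
	        }
	        out, err := exec.Command("sudo", args...).CombinedOutput()
	        if err != nil {
	            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	        }
	        return nil
	    }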
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
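	[editor's note] The pod_ready.go lines above poll each system-critical pod until its Ready condition turns True (the metrics-server pod is the one that never does, which is the failure these tests report). A minimal client-go sketch of that readiness check, assuming a simple per-pod loop rather than minikube's label-selector-driven implementation:

	    package example

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	            if err == nil {
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
	                        return nil
	                    }
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	    }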
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
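	[editor's note] The healthz probe above is a plain HTTPS GET against the apiserver endpoint that succeeds when the response body is "ok". A short sketch of such a probe; note the real check would trust the cluster CA, and InsecureSkipVerify is used here only to keep the example self-contained:

	    package example

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    // apiserverHealthy returns nil when GET <endpoint>/healthz answers 200 "ok",
	    // e.g. apiserverHealthy("https://192.168.39.125:8443").
	    func apiserverHealthy(endpoint string) error {
	        client := &http.Client{
	            Timeout:   5 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	        }
	        resp, err := client.Get(endpoint + "/healthz")
	        if err != nil {
	            return err
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
	            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	        }
	        return nil
	    }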
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
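	[editor's note] The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (here https://control-plane.minikube.internal:8444); otherwise it is removed so `kubeadm init` can rewrite it. A small local sketch of that logic (the real steps run over SSH with sudo; file list and endpoint are taken from the log, the helper is illustrative):

	    package example

	    import (
	        "os"
	        "strings"
	    )

	    // removeStaleConfigs deletes any kubeconfig that does not mention the
	    // expected endpoint, mirroring the grep + `rm -f` sequence above.
	    func removeStaleConfigs(endpoint string, paths []string) error {
	        for _, p := range paths {
	            data, err := os.ReadFile(p)
	            if err == nil && strings.Contains(string(data), endpoint) {
	                continue // already points at the right apiserver, keep it
	            }
	            if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
	                return err
	            }
	        }
	        return nil
	    }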
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
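The run above is minikube's log-gathering pass: each control-plane component is resolved to a container ID with crictl, then that container's recent logs are tailed, followed by the runtime, kubelet and kernel logs. A minimal way to reproduce it by hand on the node, assuming the same crictl/CRI-O setup recorded in these lines:

    # Resolve each component to its container ID(s), then tail its logs,
    # mirroring the crictl invocations in the lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        echo "=== $name ($id) ==="
        sudo /usr/bin/crictl logs --tail 400 "$id"
      done
    done
    # Runtime- and node-level logs come from journald and dmesg, as above.
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400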
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
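The repeated 'get sa default' calls above are the elevateKubeSystemPrivileges wait: minikube polls roughly every 500ms until kubeadm has created the default service account, then records the total wait time. An equivalent loop on the node, assuming the kubectl binary and kubeconfig paths shown in the log:

    # Poll until the default ServiceAccount exists, as in the repeated Run: lines above.
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done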
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
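Enabling these addons amounts to copying static manifests into /etc/kubernetes/addons over SSH and applying them with the bundled kubectl against the cluster-internal kubeconfig. A condensed sketch of the apply step using the exact paths from the lines above (the manifest contents ship inside the minikube binary and are not reproduced here):

    # Apply the storage-provisioner, default-storageclass and metrics-server manifests
    # that were copied to /etc/kubernetes/addons, as in the Run: lines above.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.0/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      -f /etc/kubernetes/addons/storageclass.yaml \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml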
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
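The healthz wait above is performed from minikube's own client; an equivalent manual probe against the same endpoint (the apiserver's serving certificate is not trusted by the host, hence the -k) would look like this:

    # Probe the apiserver health endpoint shown above; a healthy server returns
    # HTTP 200 with the body "ok".
    curl -sk https://192.168.61.228:8443/healthz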
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
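The NodePressure step reads the node's reported capacity (ephemeral storage and CPU) and its pressure conditions. Assuming the kubeconfig context minikube writes for this profile, the same data can be inspected from the host with:

    # Show the node conditions and capacity summarized above.
    kubectl --context no-preload-944426 describe nodes | grep -E -A8 '^(Conditions|Capacity):'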
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
	
	
	==> CRI-O <==
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.914240055Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012179914221139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1db92de9-c526-4def-aa35-d43e4cb8460d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.914805488Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=261c64bb-04ef-491d-9189-ef74fca8d770 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.914853293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=261c64bb-04ef-491d-9189-ef74fca8d770 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.914882319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=261c64bb-04ef-491d-9189-ef74fca8d770 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.948772518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8c28228-75df-4e51-8fd5-aa6498370a9a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.948843055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8c28228-75df-4e51-8fd5-aa6498370a9a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.949750444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9808f6bb-3207-4242-86f9-55ffc09e7ab9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.950106545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012179950086653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9808f6bb-3207-4242-86f9-55ffc09e7ab9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.951031162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40f8e41a-e6e7-4bcf-9a9c-b0743103e6cc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.951078757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40f8e41a-e6e7-4bcf-9a9c-b0743103e6cc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.951115915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=40f8e41a-e6e7-4bcf-9a9c-b0743103e6cc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.983115314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5b4c4ba-23ff-4e97-8c83-264c963bf09c name=/runtime.v1.RuntimeService/Version
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.983203115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5b4c4ba-23ff-4e97-8c83-264c963bf09c name=/runtime.v1.RuntimeService/Version
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.984664199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ab7c62c-d35f-46e3-adbb-a3142f5a0d03 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.985014152Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012179984992418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ab7c62c-d35f-46e3-adbb-a3142f5a0d03 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.985819701Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1df678c9-5b49-4e75-8e05-77c15d384c21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.985889954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1df678c9-5b49-4e75-8e05-77c15d384c21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:19 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:19.985925075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1df678c9-5b49-4e75-8e05-77c15d384c21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:20 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:20.019307418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f65296d8-0159-435b-9b55-fa38948903e1 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:16:20 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:20.019396507Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f65296d8-0159-435b-9b55-fa38948903e1 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:16:20 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:20.020412114Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db33240d-2957-4b1a-91f1-402e67efb6a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:16:20 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:20.020851008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012180020829756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db33240d-2957-4b1a-91f1-402e67efb6a0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:16:20 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:20.021326397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8cd99eb0-3cab-448a-861f-fa07bcc5fa77 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:20 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:20.021389990Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8cd99eb0-3cab-448a-861f-fa07bcc5fa77 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:16:20 old-k8s-version-247539 crio[653]: time="2024-08-18 20:16:20.021426236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8cd99eb0-3cab-448a-861f-fa07bcc5fa77 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug18 20:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051405] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041581] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.935576] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug18 20:08] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637295] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.911494] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.071095] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080090] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.174365] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.151707] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.249665] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.351764] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.067129] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.161515] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[ +12.130980] kauditd_printk_skb: 46 callbacks suppressed
	[Aug18 20:12] systemd-fstab-generator[5096]: Ignoring "noauto" option for root device
	[Aug18 20:14] systemd-fstab-generator[5379]: Ignoring "noauto" option for root device
	[  +0.062456] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:16:20 up 8 min,  0 users,  load average: 0.00, 0.08, 0.06
	Linux old-k8s-version-247539 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00071b180)
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]: goroutine 146 [syscall]:
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]: syscall.Syscall6(0xe8, 0xd, 0xc000a59b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x7, 0x0, 0x0)
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000a59b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000b7a280, 0x0, 0x0, 0x0)
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000b88780)
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Aug 18 20:16:17 old-k8s-version-247539 kubelet[5563]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Aug 18 20:16:17 old-k8s-version-247539 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 18 20:16:17 old-k8s-version-247539 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 18 20:16:17 old-k8s-version-247539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 18 20:16:17 old-k8s-version-247539 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 18 20:16:17 old-k8s-version-247539 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 18 20:16:18 old-k8s-version-247539 kubelet[5619]: I0818 20:16:18.082110    5619 server.go:416] Version: v1.20.0
	Aug 18 20:16:18 old-k8s-version-247539 kubelet[5619]: I0818 20:16:18.082445    5619 server.go:837] Client rotation is on, will bootstrap in background
	Aug 18 20:16:18 old-k8s-version-247539 kubelet[5619]: I0818 20:16:18.089568    5619 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 18 20:16:18 old-k8s-version-247539 kubelet[5619]: I0818 20:16:18.090684    5619 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 18 20:16:18 old-k8s-version-247539 kubelet[5619]: W0818 20:16:18.090716    5619 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 2 (228.015443ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-247539" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (705.76s)
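The failure mode above (kubelet restart counter at 20, apiserver reported Stopped, kubectl refused on localhost:8443) can be probed by hand against the same profile. A minimal sketch, assuming the old-k8s-version-247539 VM is still present on the CI host:

	# inspect kubelet state and recent logs inside the guest
	out/minikube-linux-amd64 -p old-k8s-version-247539 ssh -- sudo systemctl status kubelet
	out/minikube-linux-amd64 -p old-k8s-version-247539 ssh -- sudo journalctl -u kubelet --no-pager | tail -n 50
	# should reproduce the "connection refused" while the apiserver is down
	out/minikube-linux-amd64 -p old-k8s-version-247539 kubectl -- get nodes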

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0818 20:13:16.212678   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291295 -n embed-certs-291295
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-18 20:22:06.009765279 +0000 UTC m=+6236.402104548
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
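The wait that timed out here is a poll on the label selector shown above; a roughly equivalent manual check, assuming the embed-certs-291295 kube-context from this run is still available, would be:

	# list whatever matched the selector the harness was polling for
	kubectl --context embed-certs-291295 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# same condition and timeout the test waited on
	kubectl --context embed-certs-291295 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m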
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-291295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-291295 logs -n 25: (2.255023588s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-944426             | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-868662                  | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
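While the embed-certs node restarts its control plane, the old-k8s-version machine is still booting: libmachine keeps querying the libvirt network for the domain's DHCP lease and retries with a growing delay until an IP appears. A minimal sketch of that retry pattern (the lookupIP function below is a stand-in, not the real libmachine call, and the backoff growth factor is illustrative):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder for asking the hypervisor for the domain's
// current IP address; it fails until a DHCP lease exists.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with an increasing delay, mirroring the
// "will retry after ..." lines in the log above.
func waitForIP(domain string, attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the backoff between attempts
	}
	return "", fmt.Errorf("%s never reported an IP", domain)
}

func main() {
	if _, err := waitForIP("old-k8s-version-247539", 5); err != nil {
		fmt.Println(err)
	}
}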
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
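Before probing /healthz, minikube first waits for a kube-apiserver process to exist on the node, re-running pgrep roughly every half second. A simplified local sketch of that wait (run directly rather than over SSH and without sudo, with a shorter pattern than the one in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern appears or
// the timeout elapses.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-f", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}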
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
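The probe above cycles through 403 (anonymous access to /healthz is still forbidden while the RBAC bootstrap roles are being created), then 500 while a few post-start hooks finish, and finally 200 once the apiserver is ready. A rough sketch of that polling loop; it skips TLS verification instead of trusting the cluster CA, which is an assumption of this sketch and not how minikube's real client behaves:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification for brevity; the real code trusts the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.125:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}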
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
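The 496-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube generates for the kvm2+crio combination. The exact contents are not reproduced in the log; the sketch below writes an illustrative conflist of roughly that shape (field values are examples, not the real file):

package main

import (
	"fmt"
	"os"
)

// An illustrative bridge CNI config of the kind written to
// /etc/cni/net.d/1-k8s.conflist; values are examples only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}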
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
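As the log shows, pod_ready.go skips a control-plane pod whenever its hosting node still reports Ready=False, and otherwise waits on the pod's own Ready condition (which is why kube-proxy-5fjm2 passes while kube-scheduler keeps polling). A condensed sketch of the pod-side check using client-go; the kubeconfig path and the hard-coded pod name are assumptions for illustration, not minikube's own helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(2 * time.Second) {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-embed-certs-291295", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
	}
	fmt.Println("timed out waiting for pod to be Ready")
}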
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
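WaitForSSH above shells out to /usr/bin/ssh with host-key checking disabled and simply re-runs `exit 0` until the guest's sshd accepts the connection. A pared-down sketch of that loop (address and key path are taken from the log; the real invocation uses the fuller option set shown above and retries on a libmachine schedule rather than a fixed 3s sleep):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a trivial remote command until sshd on the guest
// accepts the connection or the timeout elapses.
func waitForSSH(addr, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", addr, timeout)
}

func main() {
	err := waitForSSH("192.168.50.105",
		"/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa",
		2*time.Minute)
	fmt.Println(err)
}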
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
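configureAuth regenerates the machine's Docker server certificate with the SAN list shown above (localhost, minikube, the node name, and its IPs), signed by the minikube CA. A compact sketch of creating a certificate with those SANs using crypto/x509; for brevity it self-signs rather than signing with the CA key as minikube actually does, and the expiry simply reuses the CertExpiration value from the cluster config:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-247539"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-247539"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.105")},
	}
	// Self-signed for brevity; minikube signs this with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}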
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
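
The clock fix above reads the guest's "date +%s.%N" output, compares it with the host's wall clock, and only forces a resync when the difference exceeds a tolerance (the 94ms delta here passes). A small Go sketch of that parse-and-compare step; the 2s tolerance is illustrative, not the value minikube uses.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1724011690.717307394" (date +%s.%N) into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		frac := parts[1]
    		if len(frac) < 9 {
    			// Right-pad to nanosecond precision so ".7" means 700ms, not 7ns.
    			frac += strings.Repeat("0", 9-len(frac))
    		}
    		nsec, err = strconv.ParseInt(frac[:9], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1724011690.717307394")
    	if err != nil {
    		panic(err)
    	}
    	host := time.Now()
    	delta := time.Duration(math.Abs(float64(guest.Sub(host))))
    	const tolerance = 2 * time.Second // illustrative threshold only
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }
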
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
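
The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the code treats the error as non-fatal, loads the module, and then enables IPv4 forwarding. A compact Go sketch of that probe-then-fallback sequence (command names as in the log, error handling simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s %v: %w (output: %s)", name, args, err, out)
    	}
    	return nil
    }

    func main() {
    	// 1. Probe whether bridged traffic already passes through iptables.
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		// Expected on a fresh guest: the proc entry appears only after
    		// br_netfilter is loaded, so load it and continue.
    		fmt.Println("sysctl probe failed (might be okay):", err)
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			panic(err)
    		}
    	}
    	// 2. Ensure the guest forwards IPv4 traffic for pod networking.
    	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		panic(err)
    	}
    	fmt.Println("netfilter/bridge prerequisites configured")
    }
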
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
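
The preload handling above is a three-step flow: stat the target path on the guest, copy the cached images tarball over if it is missing, then extract it into /var with lz4 and delete the tarball. A minimal Go sketch of the extract-and-clean-up part via os/exec; the path and tar flags match the log, and running it naturally requires the tarball and lz4 to be present.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"

    	// Step 1: only extract if the tarball actually landed on this machine.
    	if _, err := os.Stat(tarball); err != nil {
    		fmt.Println("no preload tarball found, images would be pulled instead:", err)
    		return
    	}

    	// Step 2: unpack the cached images into /var, preserving xattrs so
    	// file capabilities survive (matches the tar flags in the log).
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}

    	// Step 3: the tarball is only a transport vehicle; reclaim the space.
    	if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Println("preloaded images extracted")
    }
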
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
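
Each "needs transfer" line above results from inspecting the image on the guest and comparing its ID against the pinned hash; stale or missing images are removed with crictl and reloaded from the local image cache, and the final warning simply means the cached file for kube-controller-manager was not on disk. A hedged Go sketch of that per-image decision (not minikube's code; the expected ID and cache path are taken from the log but treated as placeholders):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // inspectID returns the local image ID as reported by podman, or "" if the
    // image is not present on the guest at all.
    func inspectID(image string) string {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return ""
    	}
    	return strings.TrimSpace(string(out))
    }

    // ensureImage mirrors the per-image flow in the log: keep the image if its
    // ID matches the pinned hash, otherwise remove it and fall back to the cache.
    func ensureImage(image, wantID, cachePath string) error {
    	if got := inspectID(image); got == wantID {
    		fmt.Printf("%s already present, skipping\n", image)
    		return nil
    	}
    	fmt.Printf("%q needs transfer: removing stale copy\n", image)
    	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run() // ignore "not found"
    	if _, err := os.Stat(cachePath); err != nil {
    		// This is exactly the condition behind the warning above.
    		return fmt.Errorf("unable to load cached image: %w", err)
    	}
    	fmt.Printf("loading %s from %s\n", image, cachePath)
    	// The real code copies the cached tarball to the guest and loads it there.
    	return nil
    }

    func main() {
    	err := ensureImage(
    		"registry.k8s.io/kube-controller-manager:v1.20.0",
    		"b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080",
    		os.ExpandEnv("$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0"),
    	)
    	if err != nil {
    		fmt.Println("X", err)
    	}
    }
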
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
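
The "-checkend 86400" runs above ask whether each certificate will still be valid 24 hours from now; only then is the existing PKI reused for the restart. An equivalent check written against Go's crypto/x509, for a single PEM file (the path is one of those checked above, used here purely as an example):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // becomes invalid within d, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h, would regenerate")
    	} else {
    		fmt.Println("certificate is valid for at least another 24h")
    	}
    }
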
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
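	The /etc/hosts rewrite above strips any existing control-plane.minikube.internal line and appends the current mapping via a temp file. A minimal Go sketch of the same idea (ensureHostsEntry is a hypothetical helper; it uses os.Rename where the real command uses sudo cp):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(hostsPath, ip, host string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop the stale mapping, if any
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		tmp := hostsPath + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, hostsPath)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.72.111", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}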
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
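	The openssl x509 -hash steps above compute each CA's subject-name hash and link it as <hash>.0 under /etc/ssl/certs, which is how OpenSSL-style trust stores look certificates up. A minimal Go sketch of that step, shelling out to openssl (hashAndLink is a hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func hashAndLink(certPath, certsDir string) error {
		// Ask openssl for the subject-name hash of the certificate.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))

		// Create the <hash>.0 symlink the TLS stack expects, replacing any stale one.
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := hashAndLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}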
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
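	The -checkend 86400 runs above ask openssl whether each certificate expires within the next 24 hours. The same check done directly with Go's crypto/x509, as a minimal sketch (expiresWithin is a hypothetical helper):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate's NotAfter falls inside the given window.
	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}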
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
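	Because existing configuration files were found, the restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full init, so existing cluster state is reused. A minimal Go sketch of that sequence, assuming kubeadm is on PATH and the phases are run locally instead of over minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// runInitPhases replays the same phase order logged above against one kubeadm config.
	func runInitPhases(kubeadmCfg string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", kubeadmCfg)
			cmd := exec.Command("kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				return fmt.Errorf("kubeadm %v: %w", p, err)
			}
		}
		return nil
	}

	func main() {
		if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}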
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
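The lines above show the no-preload flow restocking the CRI-O runtime from minikube's on-disk image cache: "podman image inspect" probes for each image, "crictl rmi" clears a stale tag, and "podman load -i" re-imports the cached tarball. The Go sketch below mimics that sequence with local os/exec calls; the cache directory and image list are illustrative assumptions, not minikube's actual cache_images.go implementation.

// Illustrative sketch only: ensure an image is present in the runtime by
// reloading it from a cached tarball when "podman image inspect" fails.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// ensureImage loads a cached tarball when the image is missing from the runtime.
func ensureImage(image, cacheDir string) error {
	// "podman image inspect" exits non-zero when the image is absent.
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Run(); err == nil {
		return nil // already present in the container runtime
	}
	// Clear any stale tag first, mirroring the "crictl rmi" calls in the log.
	_ = exec.Command("sudo", "crictl", "rmi", image).Run()

	// Cached tarballs in this sketch are named like kube-proxy_v1.31.0.
	name := strings.ReplaceAll(filepath.Base(image), ":", "_")
	tarball := filepath.Join(cacheDir, name)
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	for _, img := range []string{"registry.k8s.io/kube-proxy:v1.31.0"} {
		if err := ensureImage(img, "/var/lib/minikube/images"); err != nil {
			fmt.Println(err)
		}
	}
}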
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
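The pod_ready.go lines interleaved here poll pod status until the Ready condition is True or the 4m0s budget runs out. A minimal client-go sketch of that wait loop follows; the kubeconfig path is a placeholder, and this is the general idea rather than minikube's own helper.

// Minimal sketch: wait up to 4 minutes for a pod's Ready condition to be True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-brqj6", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // the logs above poll on a similar cadence
	}
	fmt.Println("timed out waiting for pod to be Ready")
}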
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
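The block ending above is the multi-document kubeadm.yaml (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets copied to /var/tmp/minikube/kubeadm.yaml.new. One quick sanity check for a file like this is to decode each YAML document and read back a field or two; the sketch below does that with gopkg.in/yaml.v3 against an assumed local copy of the file, and is not part of minikube itself.

// Sketch: decode each document in a multi-document kubeadm config and print
// its apiVersion/kind plus kubernetesVersion where present.
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

type doc struct {
	APIVersion        string `yaml:"apiVersion"`
	Kind              string `yaml:"kind"`
	KubernetesVersion string `yaml:"kubernetesVersion,omitempty"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // iterates over the "---" separated documents
	for {
		var d doc
		if err := dec.Decode(&d); err != nil {
			break // stops at io.EOF, or earlier on a malformed document
		}
		fmt.Printf("%s/%s", d.APIVersion, d.Kind)
		if d.KubernetesVersion != "" {
			fmt.Printf(" (kubernetesVersion: %s)", d.KubernetesVersion)
		}
		fmt.Println()
	}
}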
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
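The openssl x509 -noout -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours. The same check can be expressed with Go's standard crypto/x509, as in the sketch below; the certificate path is one of the files named in the log, reused here purely for illustration.

// Sketch: report whether a PEM certificate expires within the given window,
// equivalent in spirit to `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon) // true would fail the -checkend style check
}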
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
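The healthz probes above cycle through connection refused, 403 (the anonymous user is not yet authorized), and 500 (post-start hooks such as rbac/bootstrap-roles still pending) before the API server finally reports ok. The sketch below shows that polling pattern against the address taken from the log; TLS verification is skipped only to keep the example self-contained.

// Sketch: poll the apiserver /healthz endpoint until it returns 200 or a
// deadline passes, treating transient errors and non-200 codes as "not ready".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.228:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for apiserver healthz")
}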
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
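
The [+]/[-] listing above is the apiserver's verbose /healthz report; the probe minikube is polling here can be reproduced by hand. A minimal sketch, assuming the Kubernetes default of anonymous access to the health endpoints and reusing the endpoint shown in this log:

    # Fetch the itemized health report (the same [+]/[-] lines logged above).
    curl -sk "https://192.168.61.228:8443/healthz?verbose"

    # A single failing post-start hook can also be queried on its own sub-path.
    curl -sk "https://192.168.61.228:8443/healthz/poststarthook/rbac/bootstrap-roles"
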
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
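
The "kubeadm init phase addon all" invocation above re-applies the two built-in addons after the control-plane restart; it is equivalent to running the individual addon phases. A sketch, assuming the same kubeadm config file referenced in this log:

    # CoreDNS (Deployment/Service) and kube-proxy (DaemonSet) are the built-in addons.
    sudo kubeadm init phase addon coredns --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase addon kube-proxy --config /var/tmp/minikube/kubeadm.yaml
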
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
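
Every wait in the block above is skipped because the node itself still reports Ready=False right after the restart. The underlying node condition can be inspected directly; a sketch, assuming the kubeconfig context matches the profile name appearing in this log:

    # Show the Ready condition (status, reason, message) for the restarted node.
    kubectl --context no-preload-944426 get node no-preload-944426 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
    # Full view, including recent events on the node.
    kubectl --context no-preload-944426 describe node no-preload-944426
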
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
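
The addons apply cleanly here, but the metrics-server pod never reaches Ready later in this log, which is what the repeating pod_ready.go:103 lines below keep polling for. A sketch of the usual manual follow-up, assuming the context and pod names taken from this log (they differ per run):

    # Why is the pod not Ready? Check its events and container logs.
    kubectl --context no-preload-944426 -n kube-system describe pod metrics-server-6867b74b74-mhhbp
    kubectl --context no-preload-944426 -n kube-system logs deploy/metrics-server
    # metrics-server also registers an APIService; check whether it reports Available.
    kubectl --context no-preload-944426 get apiservice v1beta1.metrics.k8s.io
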
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
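
This second loop (process 74389, the run using the v1.20.0 binaries) never finds any control-plane containers, so each "describe nodes" attempt fails with connection refused on localhost:8443. On the guest, the usual checks are whether the kubelet is running and whether the static-pod manifests are in place; a sketch:

    # Any control-plane containers at all, running or exited?
    sudo crictl ps -a
    # Is the kubelet up, and what is it logging?
    sudo journalctl -u kubelet -n 100 --no-pager
    # Are the static-pod manifests present for the kubelet to start the control plane?
    ls /etc/kubernetes/manifests/
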
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
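(The block above repeats for process 74389: crictl reports no containers for any control-plane component, so the bundled kubectl's "describe nodes" call is refused because nothing is listening on localhost:8443. A minimal sketch of the same per-component container check is shown below; it mirrors the `sudo crictl ps -a --quiet --name=...` commands in the Run: lines, but the component list, retry interval, and sudo usage are assumptions, not minikube's code.)

```go
// Illustrative sketch: check whether the control-plane containers exist, the way
// the cri.go/logs.go lines above do. Empty crictl output corresponds to the
// `found id: ""` / `0 containers` log lines, which is why the apiserver on
// localhost:8443 never answers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"}
	for {
		missing := 0
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil || strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container found matching %q\n", name)
				missing++
			}
		}
		if missing == 0 {
			fmt.Println("all control-plane containers present")
			return
		}
		time.Sleep(3 * time.Second) // assumed interval; the log polls roughly every few seconds
	}
}
```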
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
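At this point kubectl cannot reach the apiserver on localhost:8443, so every "describe nodes" attempt in this retry loop fails the same way and minikube falls back to journalctl and crictl output for diagnostics. A minimal sketch of reproducing the same probes by hand on the node (run via `minikube ssh`; assumes crictl is installed, and uses only commands that already appear in this log plus a hypothetical grep filter):

    # is an apiserver process or container present at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    sudo crictl ps -a --quiet --name=kube-apiserver
    # why the kubelet has not started it
    sudo journalctl -u kubelet -n 400 | grep -i apiserver
    # keeps failing with "connection refused" until the apiserver is up
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig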
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
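[Editor's note, not part of the captured log] The block above is the point where process 73815 stops waiting: after 4m0s the metrics-server pod never reports Ready, the extra wait times out, and minikube abandons restarting the primary control plane in place and falls back to "kubeadm reset". A compact illustrative Go sketch of that wait-then-reset decision follows; the pod name, namespace, and 4-minute budget mirror the log, but the code itself is an assumption for illustration, not minikube's pod_ready implementation.

    // Sketch only: poll a pod's Ready condition until a deadline.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func podReady(pod string) bool {
    	out, err := exec.Command("kubectl", "get", "pod", pod, "-n", "kube-system",
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		if podReady("metrics-server-6867b74b74-g2kt7") {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	// Timed out: at this point minikube gives up on an in-place restart
    	// and resets the cluster with "kubeadm reset", as the log line shows.
    	fmt.Println("timed out waiting for Ready; falling back to kubeadm reset")
    }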
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
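[Editor's note, not part of the captured log] The preceding block for process 74389 is the reset path in full: "kubeadm reset", a check that kubelet is inactive, copying the freshly rendered kubeadm.yaml into place, a stale-config sweep over the files in /etc/kubernetes (each is grepped for the expected control-plane endpoint and removed when it does not contain it, which here includes files that no longer exist), and finally "kubeadm init" with a long --ignore-preflight-errors list. An illustrative Go sketch of the stale-config sweep follows; the constant and file list mirror the log, but the structure is an assumption for illustration, not minikube's actual code.

    // Sketch only: keep a kubeconfig-style file only if it already points at
    // the expected control-plane endpoint; otherwise remove it so that
    // "kubeadm init" regenerates it.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint (or the file itself) is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			// Stale or absent: delete it and let kubeadm recreate it.
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    			fmt.Println("removed stale config:", f)
    		}
    	}
    }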
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
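	The repeated "kubectl get sa default" calls above are a readiness poll: elevateKubeSystemPrivileges keeps retrying until the "default" service account exists in the freshly initialized cluster. A rough hand-rolled equivalent, with the binary path and flags copied from the log and the loop itself illustrative:

    # Poll until the default service account is created (retry interval assumed; log shows ~500ms spacing)
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done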
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
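	With storage-provisioner, default-storageclass and metrics-server reported as enabled, a few manual spot checks one might run against the new cluster (resource names assumed from the stock addon manifests that were just applied, not confirmed by this log):

    # Illustrative verification of the enabled addons
    kubectl --context embed-certs-291295 -n kube-system get pods            # storage-provisioner and metrics-server pods
    kubectl --context embed-certs-291295 get apiservice v1beta1.metrics.k8s.io   # APIService from metrics-apiservice.yaml
    kubectl --context embed-certs-291295 get storageclass                   # default class from storageclass.yaml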
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
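	The healthz probe above can be reproduced by hand; /healthz is typically reachable without credentials on a default cluster, so a plain curl against the endpoint taken from the log suffices (flags are illustrative):

    # Hand-run equivalent of the apiserver health check (-k skips verification of the cluster's self-signed cert)
    curl -k https://192.168.39.125:8443/healthz
    # expected body: ok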
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
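	Both pod listings show metrics-server-6867b74b74-q9hsn stuck in Pending / ContainersNotReady while every other kube-system pod is Running. Given the "- Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier in this log, an image-pull failure is a plausible cause, though the pod events are not captured here. Commands one might use to confirm (pod name copied from the listing above; the inspection itself is illustrative):

    kubectl --context embed-certs-291295 -n kube-system describe pod metrics-server-6867b74b74-q9hsn
    kubectl --context embed-certs-291295 -n kube-system get pod metrics-server-6867b74b74-q9hsn \
      -o jsonpath='{.status.containerStatuses[*].state}'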
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
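	The block above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected endpoint (https://control-plane.minikube.internal:8444) and removes the file when the check fails; here all four files are simply missing after the reset. Condensed into a single loop, with the per-file commands copied from the log and the loop itself illustrative:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done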
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
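(Editor's note, not part of the captured log.) The run above for "no-preload-944426" shows the verification sequence minikube walks through before declaring success: list CRI containers with crictl, gather per-component logs, then poll the apiserver /healthz endpoint (api_server.go lines) until it returns 200 "ok". As a rough illustration only, and not minikube's actual implementation, a health poll like the one logged at 20:13:40 could look like the Go sketch below; the endpoint URL is copied from the log, while the timeout, interval, and TLS handling (InsecureSkipVerify) are simplifying assumptions.

```go
// healthzpoll.go - illustrative sketch of polling a Kubernetes apiserver
// /healthz endpoint until it reports "ok", as seen in the api_server.go
// log lines above. Not minikube's code; interval, deadline, and the
// InsecureSkipVerify transport are assumptions made to keep it self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.61.228:8443/healthz" // address taken from the log above
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A real check would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
```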
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
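	The run above (pid 74485) only prints Done after its readiness waiters succeed: the kube-system pods, the default service account, the kubelet service, and the NodePressure conditions. A minimal hand-run equivalent of those checks, assuming the kubeconfig context the run just wrote, might be:

		kubectl --context default-k8s-diff-port-852598 get nodes
		kubectl --context default-k8s-diff-port-852598 -n kube-system get pods
		# same probe as the system_svc.go check above, run on the node:
		minikube -p default-k8s-diff-port-852598 ssh "sudo systemctl is-active kubelet"
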
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
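	The failure text above already spells out the triage path on the node; gathered into one sketch (CONTAINERID stands in for whatever the listing surfaces, it is not taken from this log):

		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# for a failing container found above:
		# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
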
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
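	The lines above amount to minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is removed otherwise before the retry. The same check, written as a loop rather than four separate probes:

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done
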
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
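	With no control-plane containers found, the run falls back to node-level log gathering; the probes it issues above boil down to:

		# per-component container lookup (repeated for etcd, coredns, kube-scheduler, ...):
		sudo crictl ps -a --quiet --name=kube-apiserver
		# node-level logs:
		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
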
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
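	The exit path above points at minikube issue 4172 and suggests forcing the kubelet cgroup driver. A retry along those lines would look roughly like the following; the profile name is a placeholder, and the remaining flags of the original start invocation (not shown in this excerpt) would be repeated as-is:

		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
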
	
	
	==> CRI-O <==
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.622735637Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012527622709572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a33dd3f-8238-45fe-ba89-ab4d65e11215 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.623311739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f0e2587e-c828-494d-9c95-de74795eeca3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.623442729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f0e2587e-c828-494d-9c95-de74795eeca3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.624184404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090,PodSandboxId:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011976512646308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec,PodSandboxId:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975557122055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05,PodSandboxId:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975391440415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
e4a0570-184c-4de8-a23d-05cc0409a71f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc,PodSandboxId:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724011974708669136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b,PodSandboxId:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011963799283434,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4,PodSandboxId:e00431662dad23fb0af046ea926683b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011963764829802,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690,PodSandboxId:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011963752073740,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289,PodSandboxId:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011963689093858,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd,PodSandboxId:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011679722597694,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f0e2587e-c828-494d-9c95-de74795eeca3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.667841352Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7afdf3bb-3160-4e22-9a62-d8f78548c525 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.667918011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7afdf3bb-3160-4e22-9a62-d8f78548c525 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.669152097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18236ec0-aecd-49b9-98f5-f51272584412 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.669732377Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012527669706994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18236ec0-aecd-49b9-98f5-f51272584412 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.670249028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88e4ce72-b562-4449-8187-70535af1c9b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.670321825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88e4ce72-b562-4449-8187-70535af1c9b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.670602368Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090,PodSandboxId:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011976512646308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec,PodSandboxId:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975557122055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05,PodSandboxId:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975391440415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
e4a0570-184c-4de8-a23d-05cc0409a71f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc,PodSandboxId:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724011974708669136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b,PodSandboxId:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011963799283434,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4,PodSandboxId:e00431662dad23fb0af046ea926683b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011963764829802,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690,PodSandboxId:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011963752073740,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289,PodSandboxId:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011963689093858,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd,PodSandboxId:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011679722597694,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88e4ce72-b562-4449-8187-70535af1c9b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.709762182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2b1c3c2-6acd-4064-aa0d-b50ba08f677a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.709859361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2b1c3c2-6acd-4064-aa0d-b50ba08f677a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.710895180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=270cfc00-e899-4596-aaef-da3bddb5e2ad name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.711756717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012527711729845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=270cfc00-e899-4596-aaef-da3bddb5e2ad name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.712196579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2770cb86-dc2f-436e-a2a3-7f00cf2edefc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.712269482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2770cb86-dc2f-436e-a2a3-7f00cf2edefc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.712568204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090,PodSandboxId:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011976512646308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec,PodSandboxId:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975557122055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05,PodSandboxId:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975391440415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
e4a0570-184c-4de8-a23d-05cc0409a71f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc,PodSandboxId:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724011974708669136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b,PodSandboxId:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011963799283434,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4,PodSandboxId:e00431662dad23fb0af046ea926683b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011963764829802,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690,PodSandboxId:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011963752073740,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289,PodSandboxId:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011963689093858,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd,PodSandboxId:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011679722597694,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2770cb86-dc2f-436e-a2a3-7f00cf2edefc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.746908597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8a0946e-70d0-4624-bf8b-7039bbb23bbd name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.747177222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8a0946e-70d0-4624-bf8b-7039bbb23bbd name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.748876820Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0766e6ae-e2ec-4225-917c-310034624090 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.749261141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012527749239674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0766e6ae-e2ec-4225-917c-310034624090 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.749779450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ac1739e-5b15-4e24-bd68-472c371f7175 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.749831606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ac1739e-5b15-4e24-bd68-472c371f7175 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:07 embed-certs-291295 crio[726]: time="2024-08-18 20:22:07.750032114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090,PodSandboxId:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011976512646308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec,PodSandboxId:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975557122055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05,PodSandboxId:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975391440415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
e4a0570-184c-4de8-a23d-05cc0409a71f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc,PodSandboxId:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724011974708669136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b,PodSandboxId:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011963799283434,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4,PodSandboxId:e00431662dad23fb0af046ea926683b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011963764829802,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690,PodSandboxId:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011963752073740,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289,PodSandboxId:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011963689093858,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd,PodSandboxId:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011679722597694,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ac1739e-5b15-4e24-bd68-472c371f7175 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f0de7e681aa8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f518f87205216       storage-provisioner
	f4e0e21856013       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   4a123aa652725       coredns-6f6b679f8f-fx7zv
	f267f32a822b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2019feb2f004c       coredns-6f6b679f8f-6785z
	e424c63d99aaa       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   9d190df7bad33       kube-proxy-8mv85
	1b2a5ae4a234a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   8ea0cb042a7fd       kube-apiserver-embed-certs-291295
	4d4358c293b3f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   e00431662dad2       kube-controller-manager-embed-certs-291295
	c0fc030335e27       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   5ac93ea9b9041       etcd-embed-certs-291295
	bed4b8074227b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   a91035800220a       kube-scheduler-embed-certs-291295
	3669d8f48420b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   25c7699f0112a       kube-apiserver-embed-certs-291295
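The listing above shows the node-level workloads (storage-provisioner, both coredns replicas, kube-proxy) still on attempt 0, while every control-plane container is on attempt 2 and an earlier kube-apiserver attempt exited roughly 14 minutes before this snapshot, i.e. the control plane was restarted without the data-plane pods being recreated. A quick, illustrative way to reproduce this view on the node itself (assuming crictl inside the VM is configured for the cri-o socket, as minikube normally sets up) is:

  $ minikube -p embed-certs-291295 ssh -- sudo crictl ps -a --name kube-apiserver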
	
	
	==> coredns [f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
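Both CoreDNS replicas above report the same configuration SHA512 and the same build (CoreDNS-1.11.1, go1.20.7), so the two pods loaded an identical Corefile. To inspect that Corefile directly, one option (not part of this test run, and assuming the kubectl context is named after the profile, as minikube does by default) is to read the coredns ConfigMap:

  $ kubectl --context embed-certs-291295 -n kube-system get configmap coredns -o yaml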
	
	
	==> describe nodes <==
	Name:               embed-certs-291295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-291295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=embed-certs-291295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 20:12:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-291295
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 20:22:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 20:18:05 +0000   Sun, 18 Aug 2024 20:12:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 20:18:05 +0000   Sun, 18 Aug 2024 20:12:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 20:18:05 +0000   Sun, 18 Aug 2024 20:12:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 20:18:05 +0000   Sun, 18 Aug 2024 20:12:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    embed-certs-291295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac8770b9ab2443f4b3f49d534e03a2f9
	  System UUID:                ac8770b9-ab24-43f4-b3f4-9d534e03a2f9
	  Boot ID:                    09586b2d-ed77-4128-a371-c04b89982a74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-6785z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-6f6b679f8f-fx7zv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-embed-certs-291295                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-291295             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-291295    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-8mv85                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-embed-certs-291295             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-q9hsn               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m13s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m12s  kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-291295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-291295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-291295 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s  node-controller  Node embed-certs-291295 event: Registered Node embed-certs-291295 in Controller
	
	
	==> dmesg <==
	[  +0.050152] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040249] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769839] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.380126] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.632233] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.086018] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.058078] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061516] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.214477] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.135342] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.309673] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.231008] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.057862] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.640665] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[Aug18 20:08] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.977381] kauditd_printk_skb: 85 callbacks suppressed
	[Aug18 20:12] systemd-fstab-generator[2585]: Ignoring "noauto" option for root device
	[  +0.071066] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.995172] systemd-fstab-generator[2906]: Ignoring "noauto" option for root device
	[  +0.093774] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.814517] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.007216] systemd-fstab-generator[3049]: Ignoring "noauto" option for root device
	[Aug18 20:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690] <==
	{"level":"info","ts":"2024-08-18T20:12:44.077223Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-18T20:12:44.077921Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-08-18T20:12:44.082152Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.125:2380"}
	{"level":"info","ts":"2024-08-18T20:12:44.082796Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"f4d3edba9e42b28c","initial-advertise-peer-urls":["https://192.168.39.125:2380"],"listen-peer-urls":["https://192.168.39.125:2380"],"advertise-client-urls":["https://192.168.39.125:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.125:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T20:12:44.082888Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T20:12:44.802571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-18T20:12:44.802686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-18T20:12:44.802745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 1"}
	{"level":"info","ts":"2024-08-18T20:12:44.802780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 2"}
	{"level":"info","ts":"2024-08-18T20:12:44.802805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-08-18T20:12:44.802832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 2"}
	{"level":"info","ts":"2024-08-18T20:12:44.802858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-08-18T20:12:44.807735Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:12:44.811809Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:embed-certs-291295 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T20:12:44.814604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:12:44.814994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:12:44.815230Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:12:44.815337Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:12:44.815378Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:12:44.816079Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:12:44.821692Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2024-08-18T20:12:44.823075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:12:44.823840Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T20:12:44.831557Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T20:12:44.831681Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
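The etcd log above shows a single-member cluster (f4d3edba9e42b28c) electing itself leader at term 2 and then serving client traffic on 192.168.39.125:2379 and 127.0.0.1:2379 with client-cert auth enabled. An illustrative health check from inside the VM, reusing the certificate paths printed in the log, could look like the following; it assumes etcdctl is available on the node and that the server certificate is also accepted for client authentication, as is typical for kubeadm-style setups:

  $ sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://192.168.39.125:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health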
	
	
	==> kernel <==
	 20:22:08 up 14 min,  0 users,  load average: 0.17, 0.24, 0.15
	Linux embed-certs-291295 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b] <==
	E0818 20:17:47.380782       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0818 20:17:47.380910       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0818 20:17:47.382137       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:17:47.382191       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:18:47.382870       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:18:47.382975       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0818 20:18:47.383074       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:18:47.383155       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0818 20:18:47.384157       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:18:47.384245       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:20:47.384936       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:20:47.385295       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:20:47.385470       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:20:47.385612       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0818 20:20:47.386474       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:20:47.387737       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd] <==
	W0818 20:12:39.560358       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.636151       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.643799       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.652267       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.686669       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.696352       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.715917       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.726704       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.753920       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.825574       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.846147       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.873137       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.919373       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.950877       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.990897       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.057712       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.061044       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.134172       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.149907       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.184688       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.319357       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.330313       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.485003       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.524071       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.768436       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4] <==
	E0818 20:16:53.335308       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:16:53.780369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:17:23.341692       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:17:23.788970       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:17:53.348104       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:17:53.797966       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:18:05.551803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-291295"
	E0818 20:18:23.354127       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:18:23.809449       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:18:53.362147       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:18:53.819455       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:18:59.933720       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="341.732µs"
	I0818 20:19:13.929155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="112.469µs"
	E0818 20:19:23.368700       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:19:23.826951       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:19:53.376461       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:19:53.834772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:20:23.383018       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:20:23.842093       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:20:53.389939       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:20:53.850891       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:21:23.397655       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:21:23.859948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:21:53.404976       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:21:53.868138       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 20:12:55.214976       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 20:12:55.244211       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	E0818 20:12:55.244311       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 20:12:55.475986       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 20:12:55.476024       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 20:12:55.476060       1 server_linux.go:169] "Using iptables Proxier"
	I0818 20:12:55.482033       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 20:12:55.482286       1 server.go:483] "Version info" version="v1.31.0"
	I0818 20:12:55.482297       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 20:12:55.485063       1 config.go:197] "Starting service config controller"
	I0818 20:12:55.485100       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 20:12:55.485127       1 config.go:104] "Starting endpoint slice config controller"
	I0818 20:12:55.485131       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 20:12:55.485467       1 config.go:326] "Starting node config controller"
	I0818 20:12:55.485558       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 20:12:55.585784       1 shared_informer.go:320] Caches are synced for node config
	I0818 20:12:55.585849       1 shared_informer.go:320] Caches are synced for service config
	I0818 20:12:55.585872       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289] <==
	W0818 20:12:46.446261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 20:12:46.446272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:46.446483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 20:12:46.446553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:46.446600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 20:12:46.446629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:46.446684       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 20:12:46.446712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.289979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 20:12:47.290055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.362824       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 20:12:47.362911       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0818 20:12:47.382469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 20:12:47.382577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.395803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0818 20:12:47.395950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.530013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 20:12:47.530472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.530786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 20:12:47.530884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.564388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 20:12:47.564440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.567607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 20:12:47.567984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0818 20:12:50.534239       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 20:20:56 embed-certs-291295 kubelet[2913]: E0818 20:20:56.914436    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:20:59 embed-certs-291295 kubelet[2913]: E0818 20:20:59.086719    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012459086293062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:20:59 embed-certs-291295 kubelet[2913]: E0818 20:20:59.086993    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012459086293062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:07 embed-certs-291295 kubelet[2913]: E0818 20:21:07.913791    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:21:09 embed-certs-291295 kubelet[2913]: E0818 20:21:09.088961    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012469088459269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:09 embed-certs-291295 kubelet[2913]: E0818 20:21:09.089006    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012469088459269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:19 embed-certs-291295 kubelet[2913]: E0818 20:21:19.090877    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012479090284251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:19 embed-certs-291295 kubelet[2913]: E0818 20:21:19.091144    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012479090284251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:21 embed-certs-291295 kubelet[2913]: E0818 20:21:21.912928    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:21:29 embed-certs-291295 kubelet[2913]: E0818 20:21:29.092846    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012489092415757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:29 embed-certs-291295 kubelet[2913]: E0818 20:21:29.093214    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012489092415757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:33 embed-certs-291295 kubelet[2913]: E0818 20:21:33.911697    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:21:39 embed-certs-291295 kubelet[2913]: E0818 20:21:39.095313    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012499094672248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:39 embed-certs-291295 kubelet[2913]: E0818 20:21:39.095785    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012499094672248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:44 embed-certs-291295 kubelet[2913]: E0818 20:21:44.912755    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:21:48 embed-certs-291295 kubelet[2913]: E0818 20:21:48.937100    2913 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 20:21:48 embed-certs-291295 kubelet[2913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 20:21:48 embed-certs-291295 kubelet[2913]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 20:21:48 embed-certs-291295 kubelet[2913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 20:21:48 embed-certs-291295 kubelet[2913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 20:21:49 embed-certs-291295 kubelet[2913]: E0818 20:21:49.098002    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012509097723037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:49 embed-certs-291295 kubelet[2913]: E0818 20:21:49.098024    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012509097723037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:56 embed-certs-291295 kubelet[2913]: E0818 20:21:56.912630    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:21:59 embed-certs-291295 kubelet[2913]: E0818 20:21:59.100093    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012519099828404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:59 embed-certs-291295 kubelet[2913]: E0818 20:21:59.100547    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012519099828404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090] <==
	I0818 20:12:56.618845       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 20:12:56.628333       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 20:12:56.629906       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 20:12:56.637406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 20:12:56.637640       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-291295_8f582d16-79a9-4289-8f9b-09fa7d6f7eb7!
	I0818 20:12:56.638187       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d9b9ef5-1dd3-496c-8492-d6d91bae983c", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-291295_8f582d16-79a9-4289-8f9b-09fa7d6f7eb7 became leader
	I0818 20:12:56.738235       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-291295_8f582d16-79a9-4289-8f9b-09fa7d6f7eb7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291295 -n embed-certs-291295
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-291295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-q9hsn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-291295 describe pod metrics-server-6867b74b74-q9hsn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-291295 describe pod metrics-server-6867b74b74-q9hsn: exit status 1 (62.546271ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-q9hsn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-291295 describe pod metrics-server-6867b74b74-q9hsn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-944426 -n no-preload-944426
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-18 20:22:45.438176604 +0000 UTC m=+6275.830515874
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-944426 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-944426 logs -n 25: (2.093116276s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-944426             | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-868662                  | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
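
After restarting crio, the log waits up to 60s for the CRI socket to appear and then for crictl to answer; the wait itself is essentially a stat loop against /var/run/crio/crio.sock. A small sketch of that kind of wait (waitForFile is an illustrative helper, not the minikube implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForFile polls for path to exist, up to timeout, the way the log
    // waits for /var/run/crio/crio.sock after restarting crio.
    func waitForFile(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("socket is present")
    }
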
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
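
The /etc/hosts edit above is an idempotent rewrite: strip any existing host.minikube.internal entry, append a fresh "ip<TAB>hostname" mapping, then copy the result back into place with sudo. A rough sketch of the same rewrite in Go (rewriteHosts is an illustrative helper, not minikube code; the real command pipes through /tmp/h.$$ and "sudo cp"):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // rewriteHosts drops any line for hostname and appends a fresh
    // "ip<TAB>hostname" entry, mirroring the grep -v / echo pipeline above.
    func rewriteHosts(contents, ip, hostname string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // stale entry, drop it
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	// The real flow writes to a temp file and installs it with "sudo cp";
    	// here the rewritten file is only printed.
    	fmt.Print(rewriteHosts(string(data), "192.168.39.1", "host.minikube.internal"))
    }
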
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
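
The interleaved 74389/75210 lines above are the other profile (old-k8s-version-247539) waiting for its VM to obtain a DHCP lease: each failed lookup schedules another attempt after a growing, jittered delay. A minimal sketch of that retry pattern, where hasLease is a hypothetical stand-in for the libvirt lease lookup and the backoff schedule is only illustrative:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // hasLease is a stand-in for the real libvirt DHCP-lease lookup.
    func hasLease() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP re-polls hasLease with a growing, jittered delay until it
    // succeeds or the overall deadline expires.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := hasLease(); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2 // back off
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	if ip, err := waitForIP(5 * time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("got IP:", ip)
    	}
    }
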
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
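
The preload step above amounts to an scp of the cached tarball followed by an lz4-aware tar extraction into /var, after which the tarball is removed. A sketch of the same extraction run locally rather than over ssh_runner; it assumes tar and lz4 are installed on the host:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Same flags as the logged command: preserve security.capability xattrs,
    	// decompress with lz4, and unpack into /var.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	log.Println("preload extracted, /preloaded.tar.lz4 can be removed")
    }
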
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
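
Installing each CA above is two steps: place the PEM under /usr/share/ca-certificates, then create /etc/ssl/certs/<subject-hash>.0 pointing at it so OpenSSL-based clients pick it up. A sketch of the hash-and-symlink half, shelling out to openssl for the subject hash exactly as the log does (needs root for /etc/ssl/certs; paths taken from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// "openssl x509 -hash -noout" prints the subject hash used to name the
    	// /etc/ssl/certs/<hash>.0 symlink.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Println("openssl failed:", err)
    		return
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // mirror "ln -fs": replace an existing link if present
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Println("symlink failed:", err)
    		return
    	}
    	fmt.Println("linked", link, "->", cert)
    }
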
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
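
Each "openssl x509 -noout ... -checkend 86400" run above asks one question: does this certificate expire within the next 24 hours? The same check can be expressed with Go's crypto/x509; a minimal sketch, where expiresWithin is an illustrative helper and the path is one of the certs from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question "openssl x509 -checkend" answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
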
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
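
The sequence above polls https://192.168.39.125:8443/healthz until it answers 200, treating the 403 (anonymous user) and 500 (poststarthooks still failing) responses as "not ready yet". A rough sketch of that polling loop; TLS verification is skipped here only to keep the example short, whereas the real client trusts the cluster CA and presents client certificates:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// NOTE: InsecureSkipVerify is only for this sketch; the real check uses
    	// the cluster CA and client certs from /var/lib/minikube/certs.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.125:8443/healthz"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver health")
    }
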
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
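	The cache pass above follows a check-then-load pattern: ask the container runtime whether each required image is already present, and fall back to the on-disk cache tarball (or pull) when it is not. A minimal, hypothetical Go sketch of that pattern — not minikube's cache_images.go; the `podman image inspect` invocation is the one shown in the log, and the cache path is illustrative only:

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// imagePresent asks the runtime for the image ID; a non-zero exit status
// means the image is not in the local store.
func imagePresent(ref string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", ref).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/pause:3.2",
	}
	cacheDir := "/home/jenkins/.minikube/cache/images/amd64" // illustrative path
	for _, ref := range images {
		if imagePresent(ref) {
			fmt.Println("already in runtime:", ref)
			continue
		}
		// e.g. registry.k8s.io/pause:3.2 -> .../registry.k8s.io/pause_3.2
		tarball := filepath.Join(cacheDir, strings.ReplaceAll(ref, ":", "_"))
		fmt.Println("needs transfer, would load from:", tarball)
	}
}
```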
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
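	The kubeadm, kubelet and kube-proxy configuration dumped above is rendered from the option struct logged at kubeadm.go:181. As a rough illustration of that rendering step, here is a small, hypothetical Go sketch using text/template — not minikube's actual template code; the field values are the ones that appear in the log:

```go
package main

import (
	"os"
	"text/template"
)

// A tiny fragment of the ClusterConfiguration shown above, rendered from a
// struct. The template is an editor's illustration only.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type opts struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, opts{
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.20.0",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	})
}
```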
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
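	The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: strip any existing entry, append a fresh one, and copy the result back over /etc/hosts. A hedged Go equivalent of the same idea (host name and IP taken from the log; this is not how minikube itself implements it):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and appends
// "ip\thost", mirroring the grep -v / echo / cp pipeline in the log.
// Note: blank lines are dropped as a simplification.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old mapping, will be replaced below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing /etc/hosts needs root; point this at a scratch copy to try it out.
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.105",
		"control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```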
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
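	Each `openssl x509 -checkend 86400` run above asks whether the certificate stays valid for at least another 24 hours. The same check can be expressed with Go's standard library; a sketch, using one of the certificate paths listed in the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least d (the openssl -checkend equivalent).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```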
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
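	The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a poll loop: retry roughly every 500ms until the apiserver process appears or a deadline passes. A generic sketch of that wait (the pgrep arguments and the 500ms cadence come from the log; the overall timeout is an assumption):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until it finds a match or the deadline expires.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %q", timeout, pattern)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute) // timeout is illustrative
	fmt.Println(err)
}
```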
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
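	The provisioning step above issues a server certificate signed by the machine CA, carrying the SANs listed (127.0.0.1, 192.168.72.111, the machine hostname, localhost, minikube). A compact standard-library sketch of issuing such a SAN-bearing certificate — an editor's illustration, not the docker-machine provisioner; here the CA is generated in memory, whereas the log reuses the on-disk ca.pem/ca-key.pem:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Error handling elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-852598"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the list in the log line above.
		DNSNames:    []string{"default-k8s-diff-port-852598", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.111")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}
```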
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
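Taken together, the sed edits above (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) should leave the CRI-O drop-in in roughly the following state before the restart; this is pieced together from the commands shown, not a capture of the file on the VM, so treat the exact layout as an assumption:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, reconstructed from the sed commands above:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]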
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
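The inspect/rmi/copy/load cycle above repeats the same per-image pattern for each of the eight images listed at 20:08:52.924025; a condensed, illustrative sketch of one iteration (kube-proxy), built from the commands and paths that appear in the log rather than from minikube's actual code:

	IMG=registry.k8s.io/kube-proxy:v1.31.0
	TAR=/var/lib/minikube/images/kube-proxy_v1.31.0   # cached tarball, copied from the host cache when missing
	# 1. ask the runtime whether the image is already present
	if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	  sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # 2. drop any stale tag
	  sudo podman load -i "$TAR"                            # 3. load the cached image into CRI-O's store
	fi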
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
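The three blocks above install each CA certificate under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0). A hedged Go sketch of that hash-and-link step, shelling out to openssl exactly as the logged commands do (linkCertByHash is an illustrative name, not minikube's API):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, i.e. the logged
// "openssl x509 -hash -noout" followed by "ln -fs".
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}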
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
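The openssl -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least the next 24 hours before the cluster is restarted. The equivalent check in Go is a simple NotAfter comparison (validFor is a hypothetical helper shown only to illustrate what -checkend tests):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least duration d -- the same condition "openssl x509 -checkend" verifies.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}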
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
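The healthz sequence above is a plain HTTPS poll: 403 while anonymous requests are still forbidden, 500 while the rbac/bootstrap-roles and scheduling post-start hooks are pending, then 200 once the apiserver is fully up. A minimal sketch of such a poll loop (waitHealthz is an assumed helper, not minikube's api_server.go; TLS verification is skipped here only to keep the example short):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint every 500ms until it
// returns HTTP 200 or the timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A production client would trust the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.228:8443/healthz", 4*time.Minute))
}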
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
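The pod_ready.go lines above poll each system-critical pod for the Ready condition and skip pods whose node is itself not yet Ready. A hedged client-go sketch of the per-pod check (podReady and the kubeconfig handling are illustrative, not minikube's internal API):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named kube-system pod has condition Ready=True,
// the same signal the waits above are watching for.
func podReady(cs kubernetes.Interface, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReady(cs, "etcd-no-preload-944426")
	fmt.Println(ok, err)
}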
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
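
For context on the addon step that just completed: each manifest is first copied into the VM under /etc/kubernetes/addons and then applied with the kubectl binary at /var/lib/minikube/binaries/v1.31.0/kubectl, pointed at the in-VM kubeconfig. A hedged sketch of that apply step, reusing the *ssh.Client (and fmt import) from the earlier sketch; illustrative only, not minikube's addons.go or ssh_runner.go:

    // applyMetricsServerAddons mirrors the `kubectl apply -f ...` invocation in the
    // log above and assumes the manifests were already copied into the VM.
    func applyMetricsServerAddons(client *ssh.Client) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        cmd := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.31.0/kubectl apply" +
            " -f /etc/kubernetes/addons/metrics-apiservice.yaml" +
            " -f /etc/kubernetes/addons/metrics-server-deployment.yaml" +
            " -f /etc/kubernetes/addons/metrics-server-rbac.yaml" +
            " -f /etc/kubernetes/addons/metrics-server-service.yaml"
        out, err := sess.CombinedOutput(cmd)
        if err != nil {
            return fmt.Errorf("apply failed: %v\n%s", err, out)
        }
        return nil
    }
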
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
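
The process logging as 74389 is driving a v1.20.0 cluster whose control plane has not come back up: it polls for a kube-apiserver process and, finding none, lists CRI containers for each control-plane component and collects kubelet, dmesg, CRI-O, and container-status logs for diagnostics. A sketch of that per-component crictl listing, again assuming the SSH client from the first sketch plus the fmt and strings packages (not minikube's cri.go):

    // listControlPlaneContainers runs the same `crictl ps -a --quiet --name=<comp>`
    // queries seen in the log and returns the (possibly empty) container IDs.
    func listControlPlaneContainers(client *ssh.Client) (map[string]string, error) {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        ids := make(map[string]string)
        for _, c := range components {
            sess, err := client.NewSession()
            if err != nil {
                return nil, err
            }
            out, err := sess.CombinedOutput("sudo crictl ps -a --quiet --name=" + c)
            sess.Close()
            if err != nil {
                return nil, fmt.Errorf("crictl for %s: %v\n%s", c, err, out)
            }
            ids[c] = strings.TrimSpace(string(out)) // empty means no container found
        }
        return ids, nil
    }
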
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
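
Meanwhile the no-preload node went Ready after roughly seven seconds and the test moved on to waiting for the system-critical pods. The node wait is a poll on the node's Ready condition; a minimal sketch of that pattern with client-go, assuming a kubeconfig that reaches the cluster (illustration only, not minikube's node_ready.go; the kubeconfig path in main is hypothetical):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node's Ready condition, much like the
    // node_ready.go lines above (which waited ~7s for no-preload-944426).
    func waitNodeReady(kubeconfig, name string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
        // Hypothetical kubeconfig path; the real test uses its own per-profile config.
        if err := waitNodeReady("/var/lib/minikube/kubeconfig", "no-preload-944426", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
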
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
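
Each "describe nodes" attempt above ends with "The connection to the server localhost:8443 was refused", which simply means nothing is listening on the apiserver port inside that VM yet, so the v1.20.0 kubectl cannot reach a control plane. A quick, hedged way to express that check using only the standard net and time packages:

    // apiserverUp reports whether anything accepts TCP connections on the
    // apiserver address; false corresponds to the "connection refused" output above.
    func apiserverUp(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    // apiserverUp("localhost:8443") stays false until kube-apiserver is listening.
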
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
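
The pod_ready checks above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, then metrics-server-6867b74b74-mhhbp) all read the same thing: the pod's Ready condition. The metrics-server pods being waited on elsewhere in this log (g2kt7, brqj6, mhhbp) keep reporting Ready=False throughout this window. A minimal sketch of that condition check with client-go, under the same assumptions as the node sketch above (not minikube's pod_ready.go):

    // podReady returns the pod's Ready condition, which is what the
    // `has status "Ready":"False"` lines above are reporting.
    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
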
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
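
For context on the repeated `found id: ""` / `0 containers` entries above: minikube's cri.go filters `crictl ps` output by container name and treats an empty result as "no container found". A minimal local sketch of that check (run directly rather than through minikube's ssh_runner, with a hypothetical helper name) might look like:

```go
// Illustrative only: approximates the crictl listing that cri.go performs over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers returns the IDs of all containers (any state) whose name matches.
func listCRIContainers(name string) ([]string, error) {
	// Same flags as the logged command: all states, IDs only, filtered by name.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	ids, err := listCRIContainers("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if len(ids) == 0 {
		// The case logged repeatedly above: no control-plane containers exist yet.
		fmt.Println(`No container was found matching "kube-apiserver"`)
		return
	}
	fmt.Println("found ids:", ids)
}
```
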
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
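
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration; its contents are not reproduced in the log. A sketch of that step with a representative bridge+portmap conflist (illustrative values, not the verbatim file minikube writes) could be:

```go
// Sketch of the logged step: write a bridge CNI conflist to /etc/cni/net.d.
// The JSON below is a representative configuration, not minikube's exact file.
package main

import (
	"log"
	"os"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```
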
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
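
The trace above (api_server.go) waits for the kube-apiserver process, then polls its /healthz endpoint until it answers 200 before declaring the control plane healthy. A minimal Go sketch of that kind of readiness poll, assuming a fixed endpoint taken from the log and skipping TLS verification for brevity (the real check would trust the cluster CA instead):

// healthzpoll: a minimal sketch of an apiserver readiness poll, assuming the
// endpoint (https://192.168.39.125:8443/healthz) is known up front. TLS
// verification is skipped here only to keep the example short.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // corresponds to the "healthz returned 200: ok" line above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.125:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
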
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
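
The block above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that does not reference it (here the files are simply absent), so the following kubeadm init can write fresh ones. A minimal Go sketch of that cleanup, with the paths and endpoint copied from the log; a real run would need root on the node:

// staleconf: a minimal sketch of the stale-kubeconfig cleanup seen above.
// Keep /etc/kubernetes/*.conf only if it already points at the expected
// control-plane endpoint; otherwise remove it so `kubeadm init` can rewrite it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at a different endpoint: drop it.
			if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Printf("could not remove %s: %v\n", p, rmErr)
			}
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
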
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
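
The bridge CNI step copies a conflist into /etc/cni/net.d before the node is labelled and RBAC is bound. A sketch that writes an illustrative bridge-plus-host-local config to that path; the JSON below is an assumption for illustration only, not the exact 496-byte file minikube installs:

// cniwrite: a minimal sketch of installing a bridge CNI config like the
// /etc/cni/net.d/1-k8s.conflist step above. The embedded JSON is an
// illustrative bridge + host-local IPAM config, not minikube's exact file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
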
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
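
When the metrics-server wait times out, the test falls back to gathering diagnostics: it resolves container IDs per component with crictl and tails the last 400 lines of each. A minimal Go sketch of that gathering loop, using the same crictl flags shown in the Run lines above, assuming crictl is on PATH and the caller already has the needed privileges (the log runs it via sudo over SSH):

// crilogs: a minimal sketch of the log gathering above. Find container IDs by
// name with `crictl ps -a --quiet --name=...`, then dump the last 400 lines of
// each with `crictl logs --tail 400 <id>`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", name, err)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}
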
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
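
The addon step stages the metrics-server manifests under /etc/kubernetes/addons and applies them with the bundled kubectl against the node-local kubeconfig, as the Run lines above show. A minimal Go sketch of that apply, with paths mirrored from the log; in practice minikube executes this over SSH on the node as root:

// addonapply: a minimal sketch of the addon-enable step above, applying the
// staged metrics-server manifests with the bundled kubectl. Paths are copied
// from the log; this is an illustration, not minikube's actual helper.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl apply failed:", err)
	}
}
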
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
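	A minimal troubleshooting sketch assembled from the guidance the log prints above; every command below is one the error text itself suggests, run on the affected node (for example via 'minikube ssh -p <profile>', where <profile> is a placeholder for the profile under test, not a value taken from this log):

		# Is the kubelet running, and why did it exit?
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		# The health endpoint kubeadm polls during wait-control-plane:
		curl -sSL http://localhost:10248/healthz
		# Any control-plane containers CRI-O managed to start:
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# If the cgroup driver is the culprit (the suggestion above), retry with:
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd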
	
	
	==> CRI-O <==
	Aug 18 20:22:46 no-preload-944426 crio[733]: time="2024-08-18 20:22:46.979255896Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-vqsgw,Uid:0e4e228f-22e6-4b65-a49f-ea58560346a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011770789285157,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T20:09:14.906156853Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&PodSandboxMetadata{Name:busybox,Uid:8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,Namespace:default,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1724011770787977497,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T20:09:14.906143994Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39a451ae34f5e81f83c401337d6ce0c82c47916f4032c0fc3c5685ac81235908,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-mhhbp,Uid:2541855e-1597-4465-b244-d0d790fe4f6b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011762990714277,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-mhhbp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2541855e-1597-4465-b244-d0d790fe4f6b,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T20:09:14.9
06159233Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b159448e-15bd-4eb0-bd7f-ddba779588fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011755219562845,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-18T20:09:14.906155762Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&PodSandboxMetadata{Name:kube-proxy-2l6g8,Uid:ab70884b-4b6b-4ebc-ae54-0b3216dcae47,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011755217541053,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae54-0b3216dcae47,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-08-18T20:09:14.906153193Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-944426,Uid:b7aa70319472b0369a7d6acd78abc4bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011751418600773,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.228:8443,kubernetes.io/config.hash: b7aa70319472b0369a7d6acd78abc4bf,kubernetes.io/config.seen: 2024-08-18T20:09:10.918939923Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&PodSandboxMetadata{N
ame:kube-scheduler-no-preload-944426,Uid:99ffdac6cc9e86317bcefcc303571087,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011751412751138,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 99ffdac6cc9e86317bcefcc303571087,kubernetes.io/config.seen: 2024-08-18T20:09:10.918945962Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-944426,Uid:a9aa3a73652c83efb96dc0fdb1df0ef5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011751404872183,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-con
troller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a9aa3a73652c83efb96dc0fdb1df0ef5,kubernetes.io/config.seen: 2024-08-18T20:09:10.918944849Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-944426,Uid:a3808e4a939d67f43502a70e686fad8f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011751398271400,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.228:2379,kubernetes.io/config.hash: a3808e4a939d67f43502a70e686fad8f,ku
bernetes.io/config.seen: 2024-08-18T20:09:10.926171598Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=fd083e8f-f1f9-4f77-8bf1-8933228b9dd6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 20:22:46 no-preload-944426 crio[733]: time="2024-08-18 20:22:46.980229348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32495c8a-4765-4080-abf7-fe7368fead1e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:46 no-preload-944426 crio[733]: time="2024-08-18 20:22:46.980298310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32495c8a-4765-4080-abf7-fe7368fead1e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:46 no-preload-944426 crio[733]: time="2024-08-18 20:22:46.980493597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011786202865770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c43757b6fe324da3d6c9d1fbf744fb7afd3dd2bff9c1c41eb2afd2266b9cd9,PodSandboxId:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724011773851070525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb,PodSandboxId:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011771073937962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4,PodSandboxId:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011755355188280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae
54-0b3216dcae47,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600,PodSandboxId:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011751655760310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741,PodSandboxId:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011751711343998,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,},Annotations:map[string]string{io.kubern
etes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df,PodSandboxId:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011751592005658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0,PodSandboxId:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011751614516073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,},Annotations:map[string]string{io.kuber
netes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32495c8a-4765-4080-abf7-fe7368fead1e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.012299970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9ce9f0f-daf2-449e-aa44-01f01a61c6d9 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.012394088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9ce9f0f-daf2-449e-aa44-01f01a61c6d9 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.013321070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a431ed1-970e-45a0-943c-6a40fb291d6b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.013977907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012567013946590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a431ed1-970e-45a0-943c-6a40fb291d6b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.014454650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d002e9c-b520-4884-abc5-d34b651c4b25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.014548065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d002e9c-b520-4884-abc5-d34b651c4b25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.015017831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011786202865770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c43757b6fe324da3d6c9d1fbf744fb7afd3dd2bff9c1c41eb2afd2266b9cd9,PodSandboxId:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724011773851070525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb,PodSandboxId:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011771073937962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4,PodSandboxId:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011755355188280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae
54-0b3216dcae47,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011755341248681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588f
d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600,PodSandboxId:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011751655760310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,},Annotations:map[string]string{io.kubern
etes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741,PodSandboxId:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011751711343998,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df,PodSandboxId:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011751592005658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0,PodSandboxId:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011751614516073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d002e9c-b520-4884-abc5-d34b651c4b25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.050417578Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be3ceb58-1cd4-43b4-9c38-f51388f11a4b name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.050523009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be3ceb58-1cd4-43b4-9c38-f51388f11a4b name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.051939047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c388d461-53cb-4720-9b22-ff988917233a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.052303488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012567052282955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c388d461-53cb-4720-9b22-ff988917233a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.053023546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67c9d88a-4401-4505-928d-cd3e020c4456 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.053096043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67c9d88a-4401-4505-928d-cd3e020c4456 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.053305218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011786202865770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c43757b6fe324da3d6c9d1fbf744fb7afd3dd2bff9c1c41eb2afd2266b9cd9,PodSandboxId:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724011773851070525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb,PodSandboxId:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011771073937962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4,PodSandboxId:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011755355188280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae
54-0b3216dcae47,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011755341248681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588f
d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600,PodSandboxId:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011751655760310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,},Annotations:map[string]string{io.kubern
etes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741,PodSandboxId:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011751711343998,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df,PodSandboxId:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011751592005658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0,PodSandboxId:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011751614516073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67c9d88a-4401-4505-928d-cd3e020c4456 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.086420144Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a656bf74-c521-4b78-9c4d-3ff05e611eb5 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.086518728Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a656bf74-c521-4b78-9c4d-3ff05e611eb5 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.088140664Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5a49761-ab14-4d3d-9b61-55b561e6d798 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.088500780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012567088476843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5a49761-ab14-4d3d-9b61-55b561e6d798 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.089016054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e18d106e-a079-48c7-88c9-e73766edc068 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.089066591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e18d106e-a079-48c7-88c9-e73766edc068 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:47 no-preload-944426 crio[733]: time="2024-08-18 20:22:47.089252150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011786202865770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c43757b6fe324da3d6c9d1fbf744fb7afd3dd2bff9c1c41eb2afd2266b9cd9,PodSandboxId:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724011773851070525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb,PodSandboxId:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011771073937962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4,PodSandboxId:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011755355188280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae
54-0b3216dcae47,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011755341248681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588f
d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600,PodSandboxId:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011751655760310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,},Annotations:map[string]string{io.kubern
etes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741,PodSandboxId:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011751711343998,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df,PodSandboxId:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011751592005658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0,PodSandboxId:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011751614516073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e18d106e-a079-48c7-88c9-e73766edc068 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3bb0cae57195c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   9a4f1cd9d0876       storage-provisioner
	a9c43757b6fe3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   63c8289ef6722       busybox
	c0a76eb785f5c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   41290bd918a40       coredns-6f6b679f8f-vqsgw
	6d66c800d25d3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   f365c38b6aad6       kube-proxy-2l6g8
	ad65c84a94b18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   9a4f1cd9d0876       storage-provisioner
	38c187ad4ff35       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   3313973c0817c       kube-scheduler-no-preload-944426
	7260b47bfedc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   1a331845bb9a9       etcd-no-preload-944426
	568c722ae9e2f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   626763774bb38       kube-apiserver-no-preload-944426
	fb1a81f2aed91       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   451e970000012       kube-controller-manager-no-preload-944426
	
	
	==> coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54741 - 41316 "HINFO IN 4776076796205361173.4031226827159274279. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014000429s
	
	
	==> describe nodes <==
	Name:               no-preload-944426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-944426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=no-preload-944426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T19_59_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-944426
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 20:22:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 20:19:57 +0000   Sun, 18 Aug 2024 19:59:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 20:19:57 +0000   Sun, 18 Aug 2024 19:59:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 20:19:57 +0000   Sun, 18 Aug 2024 19:59:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 20:19:57 +0000   Sun, 18 Aug 2024 20:09:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.228
	  Hostname:    no-preload-944426
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba8c2789be914935b15347b81090b285
	  System UUID:                ba8c2789-be91-4935-b153-47b81090b285
	  Boot ID:                    89a85078-3e0f-4f58-977e-2125e57c6b90
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-6f6b679f8f-vqsgw                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-944426                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-944426             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-944426    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-2l6g8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-944426             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-6867b74b74-mhhbp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node no-preload-944426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node no-preload-944426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-944426 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-944426 status is now: NodeHasSufficientMemory
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-944426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-944426 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                kubelet          Node no-preload-944426 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-944426 event: Registered Node no-preload-944426 in Controller
	  Normal  CIDRAssignmentFailed     23m                cidrAllocator    Node no-preload-944426 status is now: CIDRAssignmentFailed
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet          Node no-preload-944426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-944426 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  13m (x9 over 13m)  kubelet          Node no-preload-944426 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-944426 event: Registered Node no-preload-944426 in Controller
	
	
	==> dmesg <==
	[Aug18 20:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050366] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042060] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.041370] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.678250] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625791] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.910269] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.059422] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059820] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.162580] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.157073] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +0.289607] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[Aug18 20:09] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.060877] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.267087] systemd-fstab-generator[1437]: Ignoring "noauto" option for root device
	[  +4.591208] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.927650] systemd-fstab-generator[2071]: Ignoring "noauto" option for root device
	[  +4.798258] kauditd_printk_skb: 58 callbacks suppressed
	[  +7.800521] kauditd_printk_skb: 8 callbacks suppressed
	[ +15.433906] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] <==
	{"level":"info","ts":"2024-08-18T20:09:12.505150Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:09:12.507155Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-18T20:09:12.519908Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"cda7d178093df040","initial-advertise-peer-urls":["https://192.168.61.228:2380"],"listen-peer-urls":["https://192.168.61.228:2380"],"advertise-client-urls":["https://192.168.61.228:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.228:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T20:09:12.522418Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T20:09:12.507857Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.228:2380"}
	{"level":"info","ts":"2024-08-18T20:09:12.522740Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.228:2380"}
	{"level":"info","ts":"2024-08-18T20:09:13.475816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-18T20:09:13.475925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-18T20:09:13.475975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 received MsgPreVoteResp from cda7d178093df040 at term 2"}
	{"level":"info","ts":"2024-08-18T20:09:13.476011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 became candidate at term 3"}
	{"level":"info","ts":"2024-08-18T20:09:13.476035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 received MsgVoteResp from cda7d178093df040 at term 3"}
	{"level":"info","ts":"2024-08-18T20:09:13.476063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 became leader at term 3"}
	{"level":"info","ts":"2024-08-18T20:09:13.476089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cda7d178093df040 elected leader cda7d178093df040 at term 3"}
	{"level":"info","ts":"2024-08-18T20:09:13.478746Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:09:13.479054Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:09:13.479386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T20:09:13.479438Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T20:09:13.478752Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"cda7d178093df040","local-member-attributes":"{Name:no-preload-944426 ClientURLs:[https://192.168.61.228:2379]}","request-path":"/0/members/cda7d178093df040/attributes","cluster-id":"a6bf8e0580476be9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T20:09:13.480102Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:09:13.480137Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:09:13.481074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.228:2379"}
	{"level":"info","ts":"2024-08-18T20:09:13.481281Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T20:19:13.512445Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":891}
	{"level":"info","ts":"2024-08-18T20:19:13.523442Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":891,"took":"10.513591ms","hash":985806228,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2764800,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-08-18T20:19:13.523566Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":985806228,"revision":891,"compact-revision":-1}
	
	
	==> kernel <==
	 20:22:47 up 14 min,  0 users,  load average: 0.08, 0.13, 0.09
	Linux no-preload-944426 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] <==
	W0818 20:19:15.771244       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:19:15.771386       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0818 20:19:15.772344       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:19:15.773456       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:20:15.773333       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:20:15.773533       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0818 20:20:15.773715       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:20:15.773795       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0818 20:20:15.774696       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:20:15.775888       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:22:15.775659       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:22:15.775941       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0818 20:22:15.776746       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:22:15.776830       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0818 20:22:15.777841       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:22:15.777903       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] <==
	E0818 20:17:18.433128       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:17:18.879042       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:17:48.439547       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:17:48.886471       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:18:18.445991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:18:18.894179       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:18:48.452448       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:18:48.902072       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:19:18.458529       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:19:18.909987       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:19:48.464718       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:19:48.917276       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:19:57.067976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-944426"
	E0818 20:20:18.471501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:20:18.925692       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:20:21.995907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="301.586µs"
	I0818 20:20:36.998249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="94.503µs"
	E0818 20:20:48.480592       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:20:48.936904       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:21:18.487318       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:21:18.944417       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:21:48.492835       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:21:48.952395       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:22:18.498983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:22:18.960471       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 20:09:15.628159       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 20:09:15.649850       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.228"]
	E0818 20:09:15.649980       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 20:09:15.689403       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 20:09:15.689434       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 20:09:15.689464       1 server_linux.go:169] "Using iptables Proxier"
	I0818 20:09:15.698972       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 20:09:15.699802       1 server.go:483] "Version info" version="v1.31.0"
	I0818 20:09:15.699883       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 20:09:15.703939       1 config.go:197] "Starting service config controller"
	I0818 20:09:15.706026       1 config.go:104] "Starting endpoint slice config controller"
	I0818 20:09:15.706952       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 20:09:15.707284       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 20:09:15.707392       1 config.go:326] "Starting node config controller"
	I0818 20:09:15.707415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 20:09:15.807738       1 shared_informer.go:320] Caches are synced for node config
	I0818 20:09:15.807786       1 shared_informer.go:320] Caches are synced for service config
	I0818 20:09:15.807978       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] <==
	I0818 20:09:12.964007       1 serving.go:386] Generated self-signed cert in-memory
	W0818 20:09:14.752088       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 20:09:14.752133       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 20:09:14.752143       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 20:09:14.752149       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 20:09:14.783216       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 20:09:14.783349       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 20:09:14.786118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 20:09:14.786166       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 20:09:14.786355       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 20:09:14.786459       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 20:09:14.886846       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 20:21:31 no-preload-944426 kubelet[1444]: E0818 20:21:31.177365    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012491177001192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:41 no-preload-944426 kubelet[1444]: E0818 20:21:41.179777    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012501179076806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:41 no-preload-944426 kubelet[1444]: E0818 20:21:41.180406    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012501179076806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:44 no-preload-944426 kubelet[1444]: E0818 20:21:44.980101    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:21:51 no-preload-944426 kubelet[1444]: E0818 20:21:51.181576    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012511181230335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:51 no-preload-944426 kubelet[1444]: E0818 20:21:51.183222    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012511181230335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:59 no-preload-944426 kubelet[1444]: E0818 20:21:59.979086    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:22:01 no-preload-944426 kubelet[1444]: E0818 20:22:01.184854    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012521184378406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:01 no-preload-944426 kubelet[1444]: E0818 20:22:01.185242    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012521184378406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:11 no-preload-944426 kubelet[1444]: E0818 20:22:11.008159    1444 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 20:22:11 no-preload-944426 kubelet[1444]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 20:22:11 no-preload-944426 kubelet[1444]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 20:22:11 no-preload-944426 kubelet[1444]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 20:22:11 no-preload-944426 kubelet[1444]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 20:22:11 no-preload-944426 kubelet[1444]: E0818 20:22:11.186498    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012531186149383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:11 no-preload-944426 kubelet[1444]: E0818 20:22:11.186544    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012531186149383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:12 no-preload-944426 kubelet[1444]: E0818 20:22:12.980723    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:22:21 no-preload-944426 kubelet[1444]: E0818 20:22:21.188498    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012541188120910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:21 no-preload-944426 kubelet[1444]: E0818 20:22:21.189969    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012541188120910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:24 no-preload-944426 kubelet[1444]: E0818 20:22:24.979348    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:22:31 no-preload-944426 kubelet[1444]: E0818 20:22:31.192585    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012551191854667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:31 no-preload-944426 kubelet[1444]: E0818 20:22:31.192728    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012551191854667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:38 no-preload-944426 kubelet[1444]: E0818 20:22:38.979797    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:22:41 no-preload-944426 kubelet[1444]: E0818 20:22:41.193608    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012561193378811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:41 no-preload-944426 kubelet[1444]: E0818 20:22:41.193708    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012561193378811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] <==
	I0818 20:09:46.288903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 20:09:46.304129       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 20:09:46.305048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 20:10:03.702554       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 20:10:03.702924       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-944426_cc3e55d8-a390-4aec-8905-21640048ba99!
	I0818 20:10:03.703160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68bb3579-0737-406e-b932-37ac245a50d7", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-944426_cc3e55d8-a390-4aec-8905-21640048ba99 became leader
	I0818 20:10:03.806446       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-944426_cc3e55d8-a390-4aec-8905-21640048ba99!
	
	
	==> storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] <==
	I0818 20:09:15.440698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0818 20:09:45.443185       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-944426 -n no-preload-944426
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-944426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mhhbp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-944426 describe pod metrics-server-6867b74b74-mhhbp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-944426 describe pod metrics-server-6867b74b74-mhhbp: exit status 1 (61.352203ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mhhbp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-944426 describe pod metrics-server-6867b74b74-mhhbp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0818 20:14:26.647053   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:14:47.092004   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:15:07.765811   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:15:13.772073   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:15:53.949700   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-18 20:22:50.006162926 +0000 UTC m=+6280.398502204
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-852598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-852598 logs -n 25: (2.16342058s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-944426             | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-868662                  | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
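provision.go logs that it is generating a server certificate signed by the local CA with SANs [127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]. Below is a standalone crypto/x509 sketch of issuing such a certificate; it creates a throwaway CA in-process so the example runs by itself, whereas the real run loads ca.pem/ca-key.pem from the paths shown above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server certificate with the SANs reported in the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-291295"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-291295", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.125")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	srvPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("issued server.pem (%d bytes)\n", len(srvPEM))
}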
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
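The filesync.go lines above scan .minikube/addons and .minikube/files for local assets and copy each file into the guest at the path it has relative to the files/ root (here files/etc/ssl/certs/149342.pem becomes /etc/ssl/certs/149342.pem). A small sketch of that mapping step follows; the walk root is a shortened placeholder and the actual copy over SSH is left out.

package main

import (
	"fmt"
	"io/fs"
	"log"
	"path/filepath"
)

func main() {
	// Placeholder root; in this run it is .../.minikube/files.
	root := "/home/jenkins/.minikube/files"

	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		// The guest target is simply the path relative to the files/ root, rooted at /.
		rel, relErr := filepath.Rel(root, path)
		if relErr != nil {
			return relErr
		}
		fmt.Printf("local asset: %s -> /%s\n", path, filepath.ToSlash(rel))
		// The real provisioner then copies the file over SSH (see the scp line above).
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}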
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
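fix.go asks the guest for date +%s.%N, parses the seconds.nanoseconds reply, and compares it with the host-side timestamp; the 77.309613ms delta above passed the tolerance check. Here is a small sketch of that comparison using the two values from this log; the 2-second tolerance is an assumption for illustration, not necessarily minikube's exact threshold.

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as "1724011670.500173623" into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1724011670.500173623") // value from the log above
	if err != nil {
		log.Fatal(err)
	}
	remote := time.Date(2024, 8, 18, 20, 7, 50, 422864010, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)

	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance && delta > -tolerance)
}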
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
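The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force cgroup_manager to cgroupfs, re-add conmon_cgroup, and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. The snippet below is a rough Go equivalent of the first two substitutions using multiline regexps, shown only to make the effect of the sed expressions concrete; the real code shells out to sed over SSH exactly as logged.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}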
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
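Because /proc/sys/net/bridge/bridge-nf-call-iptables was missing, the run falls back to modprobe br_netfilter and then enables IP forwarding. A sketch of that check-then-fallback sequence follows (it must run as root; the modprobe call and the write to /proc mirror the logged commands, just without the SSH hop).

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// If the bridge netfilter sysctl is not exposed yet, load br_netfilter first,
	// mirroring the fallback in the log above.
	if _, err := os.Stat(key); err != nil {
		fmt.Println("bridge-nf-call-iptables missing, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}

	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatal(err)
	}
	fmt.Println("ip_forward enabled")
}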
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
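After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s more for crictl to answer. Below is a minimal poll-until-deadline loop in the same spirit; the 500ms interval and the stat-only readiness check are assumptions for the sketch.

package main

import (
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the deadline passes.
func waitForPath(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Mirrors "Will wait 60s for socket path /var/run/crio/crio.sock".
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		log.Fatal(err)
	}
	fmt.Println("crio socket is ready")
}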
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
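Interleaved with the embed-certs provisioning, the old-k8s-version-247539 start is looping in retry.go, waiting for the VM to obtain a DHCP lease and sleeping progressively longer between attempts (219ms, 340ms, 407ms, ... 1.3s above). A small retry-with-backoff sketch of that pattern follows; the exact growth factor and jitter are assumptions, only the shape of the loop is taken from the log.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a jittered, growing interval between tries (the log's "will retry after ...").
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	wait := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)/2+1))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	return errors.New("machine never reported an IP address")
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 200*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil // pretend the DHCP lease showed up
	})
	fmt.Println("result:", err)
}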
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
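The preload tarball is copied to /preloaded.tar.lz4, unpacked with tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf, timed, and removed. Below is a local os/exec sketch of the same invocation with a duration metric; it assumes lz4 is installed and root rights for /var, and in the real run the command is executed over SSH.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()

	// Same flags as the logged command; -I lz4 tells tar to decompress through lz4.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}

	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
}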
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
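The bash one-liner above refreshes the control-plane.minikube.internal entry idempotently: grep -v drops any existing line for the host, the new IP/host pair is appended, and the result is copied back over /etc/hosts. The same rewrite expressed in Go is sketched below, run against a scratch file rather than the real /etc/hosts; the helper name setHostsEntry is illustrative.

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// setHostsEntry removes any line for host from hostsPath and appends "ip\thost".
func setHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue // drop the stale entry, like `grep -v $'\t<host>$'`
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines before appending, then re-add the entry.
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + ip + "\t" + host + "\n"
	return os.WriteFile(hostsPath, []byte(out), 0644)
}

func main() {
	// Illustrative invocation against a scratch copy rather than the real /etc/hosts.
	tmp := "hosts.example"
	if err := os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"), 0644); err != nil {
		log.Fatal(err)
	}
	if err := setHostsEntry(tmp, "192.168.39.125", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
	updated, _ := os.ReadFile(tmp)
	fmt.Print(string(updated))
}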
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
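Each of the openssl x509 -checkend 86400 runs above asks whether the certificate expires within the next 24 hours; a zero exit status lets the restart path reuse the existing certs. The equivalent check with crypto/x509 is sketched below; the path passed in main is one of the files from the log, and the function name expiresWithin is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}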
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
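The repeated /healthz probes above (403 while anonymous auth is still being wired up, then 500 while the rbac/bootstrap-roles post-start hook finishes, then 200) come from minikube's api_server.go wait loop. As a rough illustration only, not minikube's actual code, here is a minimal Go sketch of that kind of polling against a hypothetical endpoint, skipping TLS verification the way an anonymous probe against a self-signed apiserver certificate would:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz probes url until it returns HTTP 200 or the timeout expires.
    // TLS verification is skipped because the apiserver serves a self-signed cert
    // and the probe is anonymous, matching the 403/500 responses in the log above.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
            }
            time.Sleep(500 * time.Millisecond) // retry roughly every half second
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        // Endpoint taken from the log above; the 4-minute budget is an assumption.
        if err := pollHealthz("https://192.168.39.125:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }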
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
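The pod_ready.go entries above wait for each system-critical pod to report the Ready condition, skipping pods whose node is not yet Ready. The following client-go sketch shows that kind of check under stated assumptions (a kubeconfig in the default home location and one hard-coded pod name taken from the log); it is an illustration, not minikube's implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, nil // keep polling through transient API errors
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Pod name taken from the log above; the 4m budget mirrors the "waiting up to 4m0s" lines.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                return podIsReady(ctx, cs, "kube-system", "kube-scheduler-embed-certs-291295")
            })
        fmt.Println("ready:", err == nil)
    }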
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
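The WaitForSSH exchange above shells out to the external ssh binary with host key checking disabled and runs "exit 0" until the guest answers. A self-contained Go sketch of the same kind of probe using golang.org/x/crypto/ssh is shown below; the key path is hypothetical and the snippet is an illustration rather than the driver's own code:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Hypothetical key location; the log above uses the machine's id_rsa under .minikube/machines.
        keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-247539/id_rsa"))
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // the external ssh above also skips host key checks
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.50.105:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("exit 0") // same trivial probe command as the log above
        fmt.Printf("ssh probe err=%v output=%q\n", err, out)
    }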
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
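The configureAuth step above mints a server certificate whose SANs cover 127.0.0.1, the machine IP, and the hostnames, then copies it to /etc/docker on the guest. A compact Go sketch of generating such a certificate follows; it is self-signed for brevity (minikube signs it with its own CA) and error handling is trimmed, so treat it as an outline rather than the provisioner's real code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key generation error elided for brevity.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-247539"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go line above.
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.105")},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-247539"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }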
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
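
The fix.go lines above read the guest clock over SSH, compare it against the host clock, and accept the machine when the delta stays within a tolerance. A minimal stand-alone sketch of that comparison (hypothetical helper name, not minikube's actual fix.go code):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/host clock difference and whether it
// falls within the given tolerance, mirroring the kind of check logged above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(94 * time.Millisecond) // roughly the delta reported above
	d, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, ok)
}
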
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
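
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the cgroupfs cgroup manager and the registry.k8s.io/pause:3.2 pause image, then restarts the service. A rough local sketch of one of those sed rewrites (illustrative only; minikube runs these commands through its ssh_runner, and only the config path and sed expression are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path seen in the log above
	// Swap the cgroup manager line, as the logged sed command does.
	cmd := exec.Command("sudo", "sed", "-i",
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("sed failed: %v (%s)\n", err, out)
		return
	}
	// Reload units and restart CRI-O so the new settings take effect.
	_ = exec.Command("sudo", "systemctl", "daemon-reload").Run()
	fmt.Println(exec.Command("sudo", "systemctl", "restart", "crio").Run())
}
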
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
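
The two ssh_runner lines above check /etc/hosts for a host.minikube.internal entry and, via a bash one-liner, rewrite the file with the gateway IP. A pure-Go equivalent of that rewrite, shown only as an illustration of the logged shell pipeline (names and error handling are mine, not minikube's):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "<TAB>hostname" and
// appends a fresh "ip<TAB>hostname" entry, much like the logged bash one-liner.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"))
}
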
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
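
The openssl -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another day. A stand-alone Go version of the same check using crypto/x509 (the file path comes from the log; the helper name is mine):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path stays valid for at
// least another d, roughly equivalent to `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
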
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
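
The grep/rm sequence above keeps an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443 and otherwise deletes it so kubeadm can regenerate it. A hypothetical pure-Go equivalent of that stale-config cleanup (helper name and error handling are mine, not minikube's kubeadm.go):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// cleanStaleConfig deletes the file unless it already contains the expected
// endpoint marker; a missing file is treated as already clean.
func cleanStaleConfig(path, marker string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean
		}
		return err
	}
	if bytes.Contains(data, []byte(marker)) {
		return nil // config already targets the expected endpoint
	}
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		fmt.Println(f, cleanStaleConfig(f, "https://control-plane.minikube.internal:8443"))
	}
}
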
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
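	Restarting the existing machine goes through the kvm2 driver plugin, which boots the stopped libvirt domain and then polls for its DHCP lease (the retry.go lines that follow). A hedged equivalent by hand, assuming the virsh CLI is available on the host:
	
		virsh list --all                    # domain "no-preload-944426" shows as shut off
		virsh start no-preload-944426       # roughly what the .Start call amounts to
		virsh domifaddr no-preload-944426   # poll until the lease the log waits for appears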
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
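	configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.72.111, the hostname, localhost, minikube) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A hedged way to check the result after the fact, assuming openssl is present in the guest image:
	
		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'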
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
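	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf for the cgroupfs driver and the pause:3.10 image, loads the bridge netfilter module, enables IPv4 forwarding, and restarts CRI-O. Condensed into a shell sketch (same file and keys as in the log; an illustration rather than the exact ssh_runner calls):
	
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		sudo modprobe br_netfilter
		sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
		sudo systemctl daemon-reload && sudo systemctl restart crio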
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
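	Since the initial crictl scan found no preloaded images, the cached preload tarball is copied into the guest and unpacked into /var (image layers plus security.capability xattrs), after which the same crictl check reports everything as preloaded. The guest-side steps reduce to roughly:
	
		# unpack the preload into CRI-O's storage under /var, keeping file capabilities
		sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		sudo rm -f /preloaded.tar.lz4
		# confirm the runtime now sees the preloaded images
		sudo crictl images --output json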
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
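The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current control-plane IP. A minimal Go sketch of the same rewrite, assuming a hypothetical helper name (ensureHostEntry) and write access to the file:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any existing line for host and appends "ip\thost".
// Illustrative only; minikube performs this step via the bash one-liner over SSH.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // discard the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.72.111", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}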
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
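The ln -fs calls above publish each CA under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) so clients that scan /etc/ssl/certs by hash can find it. A rough Go sketch of that step, shelling out to openssl exactly as the log does; the helper name and error handling are illustrative only:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and creates
// /etc/ssl/certs/<hash>.0 -> certPath, mirroring the "openssl x509 -hash"
// plus "ln -fs" sequence shown in the log.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace an existing link, like ln -f
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}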
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
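Each -checkend 86400 call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now. The equivalent check in Go with crypto/x509, as a hedged sketch (the file path and helper name are illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, which is what "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}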
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
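The restart path above re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml instead of doing a full kubeadm init. A simplified Go sketch of that sequence; the paths and PATH prefix are taken from the log, but the loop itself is illustrative and not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Same shape as the log: each phase runs with the versioned binaries
		// directory prepended to PATH and the generated kubeadm config.
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase,
		)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", phase, err, out)
			os.Exit(1)
		}
	}
}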
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
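The 403 and 500 responses above are the apiserver coming up: anonymous access to /healthz is first rejected, then individual post-start hooks flip from failed to ok until the endpoint finally returns 200. A minimal Go sketch of such a poll loop; the insecure TLS config and timeouts are illustrative assumptions, not minikube's settings:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Verification is skipped only for this sketch; a real client would
		// trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.111:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}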
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
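The repeated pgrep runs above are a second process waiting for a kube-apiserver whose command line mentions the minikube profile. A hedged sketch of an equivalent poll loop (the half-second interval is an assumption based on the timestamps):
# -f: match against the full command line, -x: require the whole line to match, -n: newest match
until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.5; done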
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
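The 496-byte file copied above is the bridge CNI config generated by the "Configuring bridge CNI" step. The log does not show its contents; the sketch below is only illustrative of the general shape of such a conflist (plugin fields and the pod subnet are assumptions, not the literal file minikube wrote in this run):
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF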
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
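The pod_ready loop above skips each control-plane pod while the node itself still reports Ready:"False" and then retries. A hedged manual equivalent of that wait with kubectl (context and node name are the ones shown in this log, timeout per the log's 4m0s):
# wait for the node to go Ready, then for the DNS pods, mirroring the extra wait above
kubectl --context default-k8s-diff-port-852598 wait node/default-k8s-diff-port-852598 --for=condition=Ready --timeout=4m0s
kubectl --context default-k8s-diff-port-852598 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s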
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
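configureAuth above regenerates the machine's server certificate with the SANs listed in the provision.go line and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A quick, hedged way to confirm the SANs on the copied cert from inside the VM:
# run inside the guest, e.g. via `minikube -p no-preload-944426 ssh`
sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"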
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
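The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup and an unprivileged-port sysctl, and then CRI-O is restarted. A hedged check of the result after the restart:
# expected (per the sed commands above): pause_image = "registry.k8s.io/pause:3.10",
# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and
# "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls
sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
sudo systemctl is-active crio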
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
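Because no preload tarball exists for this profile, each required image is checked with podman image inspect, removed from the runtime if the expected hash is missing, and then loaded from the on-disk cache. A hedged manual version of that load-and-verify step for one of the images above:
# load the cached kube-proxy image into CRI-O's storage and confirm the runtime sees it
sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
sudo crictl images | grep kube-proxy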
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
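	(Annotation, not part of the log.) The LoadCachedImages step above transfers each image tarball from the host-side cache and loads it into the node's CRI-O image store via `sudo podman load -i <tar>`, which is exactly what the Run lines show. A minimal Go sketch of that pattern, under the assumption that the tarballs sit under /var/lib/minikube/images as in this run; this is illustrative, not minikube's own implementation:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
)

// loadCachedImages runs `sudo podman load -i <tarball>` for every file in dir,
// mirroring the "Loading image:" / "podman load -i" lines in the log above.
func loadCachedImages(dir string) error {
	tars, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		return err
	}
	for _, tar := range tars {
		fmt.Println("loading", tar)
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tar, err, out)
		}
	}
	return nil
}

func main() {
	if err := loadCachedImages("/var/lib/minikube/images"); err != nil {
		log.Fatal(err)
	}
}
```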
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
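	(Annotation, not part of the log.) At this point the rendered config dumped above has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. It is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check which documents ended up in the rendered file, sketched with gopkg.in/yaml.v3 (an assumption for illustration; minikube templates this config in Go rather than re-parsing it):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path as written on the node in the log above; adjust as needed.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Only the identifying fields are decoded; the rest is ignored.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
```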
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
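	(Annotation, not part of the log.) The run of `openssl x509 ... -checkend 86400` commands above is minikube confirming that each control-plane certificate remains valid for at least another 24 hours before reusing it. The equivalent check in Go, as a hedged sketch (the certificate path is taken from the log, but this is not minikube's code):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```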
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
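	(Annotation, not part of the log.) The 403 -> 500 -> 200 progression above is the apiserver coming up: anonymous access to /healthz is briefly forbidden, then individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) report failures until they finish, and finally the endpoint returns "ok". A minimal sketch of the kind of poll loop api_server.go performs, assuming the endpoint and cadence shown in the log; TLS verification is skipped here only because this is the self-signed bootstrap phase:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes, printing interim failures as the log does.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.228:8443/healthz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```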
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
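The NodePressure step above just reads the node's capacity fields; the same figures (2 CPUs, 17734596Ki ephemeral storage) can be printed with kubectl, a quick sketch:

# Print the capacity map the NodePressure check reads (context and node name are this profile's).
kubectl --context no-preload-944426 get node no-preload-944426 -o jsonpath='{.status.capacity}{"\n"}'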
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
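"kubeadm init phase addon all" re-applies only the built-in addons (CoreDNS and kube-proxy) against the restarted control plane; the same phases can be run individually with the config file shown above, a sketch:

# Equivalent to "addon all": re-apply each built-in addon phase separately.
sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon coredns --config /var/tmp/minikube/kubeadm.yaml
sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon kube-proxy --config /var/tmp/minikube/kubeadm.yaml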
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
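The readiness loop above skipped every pod because the node itself was still NotReady; once the node recovers, the same labels and components can be waited on with kubectl, a rough sketch:

# Wait for the system-critical pods the log enumerates to report Ready (timeout is illustrative).
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl --context no-preload-944426 -n kube-system wait pod --selector "$sel" --for=condition=Ready --timeout=240s
done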
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
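Both systemctl calls run over the SSH session established above; the manual equivalent with the key, user, and address sshutil logs is roughly:

# Log into the node with the key the test uses and confirm the kubelet came back after daemon-reload/start.
ssh -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22 docker@192.168.61.228 'sudo systemctl is-active kubelet && sudo journalctl -u kubelet -n 20 --no-pager'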
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
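With the three addons reported enabled, the metrics-server rollout can be inspected from the host; a sketch of the usual follow-ups (in this test the addon image is overridden with fake.domain/registry.k8s.io/echoserver:1.4, so the deployment is not expected to become Ready):

# Check the addon list and the metrics-server rollout for this profile.
minikube -p no-preload-944426 addons list
kubectl --context no-preload-944426 -n kube-system rollout status deployment/metrics-server --timeout=60s
kubectl --context no-preload-944426 get apiservice v1beta1.metrics.k8s.io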
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
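This interleaved block belongs to the old-k8s-version profile (process 74389), which finds no control-plane containers at all and keeps falling back to host-level logs; one manual pass of the same diagnostics, assuming the same crio runtime, is roughly:

# One diagnostic pass: look for control-plane containers, check the apiserver port, then read host logs.
sudo crictl ps -a --name kube-apiserver
sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
sudo journalctl -u kubelet -n 400 --no-pager | tail -n 50
sudo journalctl -u crio -n 400 --no-pager | tail -n 50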
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
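The node-Ready wait started at 20:09:18 resolves here after roughly 7 seconds; the condition it polls can be read directly, a quick sketch:

# Read the Ready condition node_ready.go polls for.
kubectl --context no-preload-944426 get node no-preload-944426 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'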
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
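The block above is one pass of minikube's retry loop: it asks the CRI runtime for each expected control-plane container by name, finds none, and falls back to collecting host logs. A minimal sketch of the same probe, assuming a node with crictl on the PATH and a reachable CRI socket (the component names and the crictl invocation are taken from the log; the script itself is only illustrative):

    # Probe the CRI runtime for each control-plane container, one name at a
    # time, as the loop above does. Empty output means "no container found".
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done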
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
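The pod_ready lines interleaved here come from other clusters under test (prefixes 73815, 74485 and 73711), each polling whether its metrics-server pod has reached the Ready condition; none of them does during this window. A rough equivalent of that check, assuming a reachable cluster and using a pod name taken from the log (sketch only, not part of the test run):

    # Print the Ready condition of the metrics-server pod that the log is
    # polling; "False" here matches the status reported above.
    kubectl -n kube-system get pod metrics-server-6867b74b74-g2kt7 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'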
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
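Every "describe nodes" attempt in this section fails the same way: the bundled kubectl cannot reach an API server at localhost:8443, which is consistent with the crictl probes finding no kube-apiserver container at all. A quick manual check along the same lines, assuming SSH access to the node (the pgrep and crictl commands mirror the ones in the log; the port check is an added, illustrative step):

    # Is any kube-apiserver process or container present on the node?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no apiserver process"
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Is anything listening on the port the kubeconfig points at?
    sudo ss -ltn | grep -w 8443 || echo "nothing listening on 8443"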
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
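Each failed iteration ends with the same collection pass: kubelet and CRI-O units via journalctl, kernel warnings via dmesg, and a full container listing. Bundled into a single pass, under the assumption that the node runs systemd with CRI-O (every command below is copied from the log lines above):

    # Gather the same diagnostics the loop collects on every iteration.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a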
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
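
The block above is one pass of minikube's diagnostics sweep while it waits for the apiserver on this node to come back: each expected control-plane container is probed with crictl, and when none are found the kubelet, dmesg and CRI-O journals are tailed and `kubectl describe nodes` is attempted (which keeps failing while nothing answers on localhost:8443). A hand-run sketch of the same checks, assuming shell access to the node and the same tool paths that appear in the log:

    # Probe each expected control-plane container (empty output = not running).
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done

    # The log sources minikube falls back to when nothing is found.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

    # The step that fails above until an apiserver answers on localhost:8443.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
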
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
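
At 20:12:22 the retry loop above gives up: the control plane could not be restarted in place, so the node is reset and re-initialised. `kubeadm reset` wipes the old state, the four kubeconfig files under /etc/kubernetes are checked for the control-plane.minikube.internal:8443 endpoint and removed (here they are already gone), and `kubeadm init` is rerun against the freshly copied /var/tmp/minikube/kubeadm.yaml. A condensed sketch of that sequence, using the same paths and CRI socket shown in the log (the preflight ignore list is abbreviated here):

    # Wipe the previous control plane through the CRI-O socket.
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force

    # Remove stale kubeconfigs that do not point at the expected endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/${f}.conf 2>/dev/null \
        || sudo rm -f /etc/kubernetes/${f}.conf
    done

    # Refresh the generated config and re-run init, skipping the checks minikube
    # always skips on a reused node (full ignore list shortened from the log).
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem
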
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
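
The 496 bytes copied to /etc/cni/net.d/1-k8s.conflist above are the bridge CNI configuration announced just before ("Configuring bridge CNI"); the log does not reproduce the file itself. A small, hypothetical after-the-fact check on the node (profile name taken from the log, binary path may differ in this run):

    # Confirm the bridge conflist is in place and inspect what was written.
    minikube -p embed-certs-291295 ssh "sudo ls -la /etc/cni/net.d"
    minikube -p embed-certs-291295 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
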
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
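
The toEnable map above records the profile's addon selection: only default-storageclass, metrics-server and storage-provisioner are true, everything else stays off. A hand-run sketch of reproducing or checking the same state per profile from the CLI (illustrative, not part of the test flow):

    # Equivalent addon state for this profile, driven from the minikube CLI.
    minikube -p embed-certs-291295 addons enable metrics-server
    minikube -p embed-certs-291295 addons enable storage-provisioner
    minikube -p embed-certs-291295 addons enable default-storageclass
    minikube -p embed-certs-291295 addons list
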
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
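	The addon enable sequence above amounts to copying the bundled manifests onto the node and applying them with the pinned kubectl binary; a minimal manual equivalent, assuming the same binary and manifest paths shown in the log lines above:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.31.0/kubectl apply \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml \
	      -f /etc/kubernetes/addons/storageclass.yaml \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml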
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
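	The healthz wait above polls the apiserver until it answers 200 with body "ok"; a minimal sketch of the same probe run from the host, assuming the node IP and port reported in this run:

	    # returns "ok" once the control plane is serving
	    curl -k https://192.168.39.125:8443/healthz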
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
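	The stale-config check above is a grep-then-remove pass: any kubeconfig under /etc/kubernetes that does not reference this profile's endpoint is deleted before kubeadm init recreates it. A minimal sketch of the same loop, assuming the port 8444 endpoint used by this profile:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8444" /etc/kubernetes/$f \
	        || sudo rm -f /etc/kubernetes/$f
	    done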
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
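	The two deprecation warnings above come from the generated /var/tmp/minikube/kubeadm.yaml still using the kubeadm.k8s.io/v1beta3 API; kubeadm's own hint is to migrate the file, roughly as follows (the output file name here is only illustrative):

	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml   # example output path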
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
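	Each "Gathering logs for ..." step above shells out to crictl: first resolving the container ID by name, then tailing the last 400 lines of that container's logs. For example, reproducing the etcd log capture from this run by hand:

	    sudo crictl ps -a --quiet --name=etcd
	    sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600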
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
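	The node- and pod-readiness polling recorded above can be reproduced by hand against the same cluster. A minimal sketch, assuming the kubectl context carries the profile name (as the "Done!" line for this profile later confirms); the 6m timeout mirrors the one in the log:

	    kubectl --context default-k8s-diff-port-852598 wait --for=condition=Ready \
	        node/default-k8s-diff-port-852598 --timeout=6m
	    kubectl --context default-k8s-diff-port-852598 -n kube-system wait \
	        --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m

	Both commands are plain kubectl and exercise the same Ready conditions that node_ready.go and pod_ready.go poll here.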
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
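	The healthz probe logged just above can be repeated by hand against the same endpoint. A sketch, assuming either anonymous access to /healthz (the upstream default) or the kubeconfig minikube has already written for this profile:

	    curl -k https://192.168.61.228:8443/healthz
	    # equivalent check through the kubeconfig-configured client
	    kubectl --context no-preload-944426 get --raw /healthz

	Both should print "ok", matching the 200 response recorded in the log.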
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
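	At this point the no-preload-944426 cluster is usable from the host. A quick, hand-run way to confirm the state captured above (control-plane pods Running, metrics-server-6867b74b74-mhhbp still Pending) could look like this; the k8s-app=metrics-server selector is the label the metrics-server addon normally applies and is an assumption here:

	    kubectl --context no-preload-944426 -n kube-system get pods -o wide
	    kubectl --context no-preload-944426 -n kube-system describe pod -l k8s-app=metrics-server

	The describe output usually shows why the pod stays Pending/unready (for example an image that cannot be pulled or probes that keep failing).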
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
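	Because storage-provisioner, default-storageclass and metrics-server were just enabled on this profile, their state can also be inspected from the host. A sketch, assuming the standard minikube CLI and the v1beta1.metrics.k8s.io APIService name that the metrics-server addon normally registers:

	    minikube -p default-k8s-diff-port-852598 addons list
	    kubectl --context default-k8s-diff-port-852598 get apiservice v1beta1.metrics.k8s.io

	The APIService only reports Available=True once the metrics-server pod (Pending in the pod list above) is up and serving.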
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
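	For reference, a minimal sketch of the diagnostics and retry step that the kubeadm/minikube output above recommends; it is an editorial aside, not part of the captured log. The individual commands, the crio socket path, and the --extra-config flag are taken verbatim from the log text; <profile> is a placeholder for the failing cluster's profile name, which this excerpt does not show.
	
	# Inspect the kubelet on the failing node (commands suggested by kubeadm above)
	minikube -p <profile> ssh -- sudo systemctl status kubelet
	minikube -p <profile> ssh -- sudo journalctl -xeu kubelet
	# List any control-plane containers CRI-O managed to start
	minikube -p <profile> ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry with the cgroup driver override suggested at the end of the log
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd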
	
	
	==> CRI-O <==
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.541323867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012571541298036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e725b5d5-9a16-445c-960a-3a29c2c1f509 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.541860505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45db2ee6-fdaf-44ff-8c6c-c4c8dea03b3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.541928469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45db2ee6-fdaf-44ff-8c6c-c4c8dea03b3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.542119434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b,PodSandboxId:1a4f5d80cbd6c92b2845d1a2456b75b776122bb6472479dd5bbca8ad4ad29871,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018578787279,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xp4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c416478-c540-4b55-9faa-95927e58d9a0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e,PodSandboxId:02cf34edaa3ed2dc4db9a41aeab7fd13c2acd71e08a972286cf1853df0114c8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018119976501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmjdr,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: b26f1a75-d466-4634-b9da-9505ca282e30,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107,PodSandboxId:6f3a5c04a09f63cfe2b2c842e8cf2396e56ed988071e0109019271b2e4ab54bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1724012017953392809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be7417-303b-4572-b9c9-1bbd594ed3fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af,PodSandboxId:36f5dc44788ca92ee4635f5d916c7376e95c3215beab5a56e1e3aadc89146279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724012016833861032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmvsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a577a1d-1e69-4bc2-ba50-c4922fcf58ae,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32,PodSandboxId:a35dfb1ab9d6dc4581afa05af6c604756ef3e95f0733df1732cf6e7d6e8b5667,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724012006085485148
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14,PodSandboxId:6a605266487369a0e03d701d5ad594a99d2797442bc193d1b41286c9fd35313d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724012006051516989,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9df5d8589a933b23e3dc29868079397,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9,PodSandboxId:07799b23ec11e6c6095a86de0c8a9b00dfab539013c1366c4cc22b7df3dae5c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724012006026915028,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d44b5831594f5f9237e6d36b37c379,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16,PodSandboxId:49f3c28de996dfb91c7d802bdfb4e8b49c11b2e09b3a643cdc48b4f9e90bfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724012005995541718,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b57415e0431c47f1a80aed8fcedb19e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9,PodSandboxId:4bcb9bab94fb35583d02206eaa17f4d02149703b88eccfb0fc8a1ec9921eb038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011719427092592,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45db2ee6-fdaf-44ff-8c6c-c4c8dea03b3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.580406244Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06741d31-6e90-4232-8a9b-5774bc0d0c33 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.580486299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06741d31-6e90-4232-8a9b-5774bc0d0c33 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.581635913Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e58c02f9-4536-4cae-a2d2-ae913c63c5b5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.582050618Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012571582031963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e58c02f9-4536-4cae-a2d2-ae913c63c5b5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.582599553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efa6a7ea-a9c7-40aa-8ad6-4dfe95b9a8dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.582651716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efa6a7ea-a9c7-40aa-8ad6-4dfe95b9a8dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.582841364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b,PodSandboxId:1a4f5d80cbd6c92b2845d1a2456b75b776122bb6472479dd5bbca8ad4ad29871,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018578787279,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xp4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c416478-c540-4b55-9faa-95927e58d9a0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e,PodSandboxId:02cf34edaa3ed2dc4db9a41aeab7fd13c2acd71e08a972286cf1853df0114c8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018119976501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmjdr,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: b26f1a75-d466-4634-b9da-9505ca282e30,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107,PodSandboxId:6f3a5c04a09f63cfe2b2c842e8cf2396e56ed988071e0109019271b2e4ab54bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1724012017953392809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be7417-303b-4572-b9c9-1bbd594ed3fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af,PodSandboxId:36f5dc44788ca92ee4635f5d916c7376e95c3215beab5a56e1e3aadc89146279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724012016833861032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmvsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a577a1d-1e69-4bc2-ba50-c4922fcf58ae,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32,PodSandboxId:a35dfb1ab9d6dc4581afa05af6c604756ef3e95f0733df1732cf6e7d6e8b5667,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724012006085485148
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14,PodSandboxId:6a605266487369a0e03d701d5ad594a99d2797442bc193d1b41286c9fd35313d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724012006051516989,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9df5d8589a933b23e3dc29868079397,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9,PodSandboxId:07799b23ec11e6c6095a86de0c8a9b00dfab539013c1366c4cc22b7df3dae5c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724012006026915028,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d44b5831594f5f9237e6d36b37c379,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16,PodSandboxId:49f3c28de996dfb91c7d802bdfb4e8b49c11b2e09b3a643cdc48b4f9e90bfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724012005995541718,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b57415e0431c47f1a80aed8fcedb19e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9,PodSandboxId:4bcb9bab94fb35583d02206eaa17f4d02149703b88eccfb0fc8a1ec9921eb038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011719427092592,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efa6a7ea-a9c7-40aa-8ad6-4dfe95b9a8dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.624607326Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc29a428-3ecb-4a37-ad66-be078a40281a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.624683335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc29a428-3ecb-4a37-ad66-be078a40281a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.626181064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=405beaef-2540-417c-ae3d-72ff04f5dfa6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.626699518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012571626678227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=405beaef-2540-417c-ae3d-72ff04f5dfa6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.627423176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37dda93c-6687-4385-83e3-400f0a09cff5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.627484810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37dda93c-6687-4385-83e3-400f0a09cff5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.628026610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b,PodSandboxId:1a4f5d80cbd6c92b2845d1a2456b75b776122bb6472479dd5bbca8ad4ad29871,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018578787279,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xp4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c416478-c540-4b55-9faa-95927e58d9a0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e,PodSandboxId:02cf34edaa3ed2dc4db9a41aeab7fd13c2acd71e08a972286cf1853df0114c8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018119976501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmjdr,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: b26f1a75-d466-4634-b9da-9505ca282e30,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107,PodSandboxId:6f3a5c04a09f63cfe2b2c842e8cf2396e56ed988071e0109019271b2e4ab54bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1724012017953392809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be7417-303b-4572-b9c9-1bbd594ed3fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af,PodSandboxId:36f5dc44788ca92ee4635f5d916c7376e95c3215beab5a56e1e3aadc89146279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724012016833861032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmvsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a577a1d-1e69-4bc2-ba50-c4922fcf58ae,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32,PodSandboxId:a35dfb1ab9d6dc4581afa05af6c604756ef3e95f0733df1732cf6e7d6e8b5667,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724012006085485148
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14,PodSandboxId:6a605266487369a0e03d701d5ad594a99d2797442bc193d1b41286c9fd35313d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724012006051516989,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9df5d8589a933b23e3dc29868079397,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9,PodSandboxId:07799b23ec11e6c6095a86de0c8a9b00dfab539013c1366c4cc22b7df3dae5c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724012006026915028,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d44b5831594f5f9237e6d36b37c379,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16,PodSandboxId:49f3c28de996dfb91c7d802bdfb4e8b49c11b2e09b3a643cdc48b4f9e90bfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724012005995541718,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b57415e0431c47f1a80aed8fcedb19e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9,PodSandboxId:4bcb9bab94fb35583d02206eaa17f4d02149703b88eccfb0fc8a1ec9921eb038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011719427092592,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37dda93c-6687-4385-83e3-400f0a09cff5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.666730981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9507aed7-db8d-403b-b999-b72ea4beb806 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.667003017Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9507aed7-db8d-403b-b999-b72ea4beb806 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.668759289Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=218b82e4-0994-4859-bea9-3a8ede9bb1ea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.669404497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012571669377005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=218b82e4-0994-4859-bea9-3a8ede9bb1ea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.669898809Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=109ba423-49fe-4671-a936-ec554b7f1fa5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.669972226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=109ba423-49fe-4671-a936-ec554b7f1fa5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:22:51 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:22:51.670180971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b,PodSandboxId:1a4f5d80cbd6c92b2845d1a2456b75b776122bb6472479dd5bbca8ad4ad29871,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018578787279,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xp4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c416478-c540-4b55-9faa-95927e58d9a0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e,PodSandboxId:02cf34edaa3ed2dc4db9a41aeab7fd13c2acd71e08a972286cf1853df0114c8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018119976501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmjdr,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: b26f1a75-d466-4634-b9da-9505ca282e30,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107,PodSandboxId:6f3a5c04a09f63cfe2b2c842e8cf2396e56ed988071e0109019271b2e4ab54bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1724012017953392809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be7417-303b-4572-b9c9-1bbd594ed3fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af,PodSandboxId:36f5dc44788ca92ee4635f5d916c7376e95c3215beab5a56e1e3aadc89146279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724012016833861032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmvsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a577a1d-1e69-4bc2-ba50-c4922fcf58ae,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32,PodSandboxId:a35dfb1ab9d6dc4581afa05af6c604756ef3e95f0733df1732cf6e7d6e8b5667,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724012006085485148
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14,PodSandboxId:6a605266487369a0e03d701d5ad594a99d2797442bc193d1b41286c9fd35313d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724012006051516989,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9df5d8589a933b23e3dc29868079397,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9,PodSandboxId:07799b23ec11e6c6095a86de0c8a9b00dfab539013c1366c4cc22b7df3dae5c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724012006026915028,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d44b5831594f5f9237e6d36b37c379,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16,PodSandboxId:49f3c28de996dfb91c7d802bdfb4e8b49c11b2e09b3a643cdc48b4f9e90bfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724012005995541718,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b57415e0431c47f1a80aed8fcedb19e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9,PodSandboxId:4bcb9bab94fb35583d02206eaa17f4d02149703b88eccfb0fc8a1ec9921eb038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011719427092592,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=109ba423-49fe-4671-a936-ec554b7f1fa5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e4b06ab3798d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1a4f5d80cbd6c       coredns-6f6b679f8f-xp4z4
	15a34037a3e77       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   02cf34edaa3ed       coredns-6f6b679f8f-fmjdr
	c375804891e54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   6f3a5c04a09f6       storage-provisioner
	d33a89ff30c15       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   36f5dc44788ca       kube-proxy-hmvsl
	f00cba1f2a869       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   a35dfb1ab9d6d       kube-apiserver-default-k8s-diff-port-852598
	9cea141aef89f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   6a60526648736       etcd-default-k8s-diff-port-852598
	bd373d02f1c94       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   07799b23ec11e       kube-scheduler-default-k8s-diff-port-852598
	89d240f106d99       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   49f3c28de996d       kube-controller-manager-default-k8s-diff-port-852598
	1bf63663e04d7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   4bcb9bab94fb3       kube-apiserver-default-k8s-diff-port-852598
	
	
	==> coredns [15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-852598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-852598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=default-k8s-diff-port-852598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 20:13:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-852598
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 20:22:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 20:18:47 +0000   Sun, 18 Aug 2024 20:13:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 20:18:47 +0000   Sun, 18 Aug 2024 20:13:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 20:18:47 +0000   Sun, 18 Aug 2024 20:13:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 20:18:47 +0000   Sun, 18 Aug 2024 20:13:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.111
	  Hostname:    default-k8s-diff-port-852598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a56486080d8241f3b3642b1785624cd5
	  System UUID:                a5648608-0d82-41f3-b364-2b1785624cd5
	  Boot ID:                    b64df251-4eae-4244-b6eb-04579e33de99
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-fmjdr                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 coredns-6f6b679f8f-xp4z4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 etcd-default-k8s-diff-port-852598                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-852598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-852598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-hmvsl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-default-k8s-diff-port-852598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-6867b74b74-gjnsb                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m14s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m14s                  kube-proxy       
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s (x2 over 9m20s)  kubelet          Node default-k8s-diff-port-852598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s (x2 over 9m20s)  kubelet          Node default-k8s-diff-port-852598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s (x2 over 9m20s)  kubelet          Node default-k8s-diff-port-852598 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m16s                  node-controller  Node default-k8s-diff-port-852598 event: Registered Node default-k8s-diff-port-852598 in Controller
	
	
	==> dmesg <==
	[  +0.039776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.006050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.512283] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.614452] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.501029] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.064609] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070091] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.208971] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.119503] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.323998] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.439679] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +0.066350] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.012215] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +4.624845] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.320396] kauditd_printk_skb: 54 callbacks suppressed
	[Aug18 20:09] kauditd_printk_skb: 31 callbacks suppressed
	[Aug18 20:13] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.233019] systemd-fstab-generator[2596]: Ignoring "noauto" option for root device
	[  +4.462481] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.593507] systemd-fstab-generator[2919]: Ignoring "noauto" option for root device
	[  +5.437000] systemd-fstab-generator[3047]: Ignoring "noauto" option for root device
	[  +0.109175] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.427378] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14] <==
	{"level":"info","ts":"2024-08-18T20:13:26.472039Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-18T20:13:26.473707Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"d9925a5c077e2b1a","initial-advertise-peer-urls":["https://192.168.72.111:2380"],"listen-peer-urls":["https://192.168.72.111:2380"],"advertise-client-urls":["https://192.168.72.111:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.111:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T20:13:26.473809Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T20:13:26.473941Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.111:2380"}
	{"level":"info","ts":"2024-08-18T20:13:26.474050Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.111:2380"}
	{"level":"info","ts":"2024-08-18T20:13:27.201740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-18T20:13:27.201813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-18T20:13:27.201862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a received MsgPreVoteResp from d9925a5c077e2b1a at term 1"}
	{"level":"info","ts":"2024-08-18T20:13:27.201879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a became candidate at term 2"}
	{"level":"info","ts":"2024-08-18T20:13:27.201904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a received MsgVoteResp from d9925a5c077e2b1a at term 2"}
	{"level":"info","ts":"2024-08-18T20:13:27.201917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a became leader at term 2"}
	{"level":"info","ts":"2024-08-18T20:13:27.201924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9925a5c077e2b1a elected leader d9925a5c077e2b1a at term 2"}
	{"level":"info","ts":"2024-08-18T20:13:27.203370Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d9925a5c077e2b1a","local-member-attributes":"{Name:default-k8s-diff-port-852598 ClientURLs:[https://192.168.72.111:2379]}","request-path":"/0/members/d9925a5c077e2b1a/attributes","cluster-id":"5b15f244ed8f8770","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T20:13:27.203444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:13:27.203532Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:13:27.203975Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T20:13:27.206284Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T20:13:27.206497Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:13:27.207048Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:13:27.209925Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T20:13:27.208367Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:13:27.210895Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.111:2379"}
	{"level":"info","ts":"2024-08-18T20:13:27.208408Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5b15f244ed8f8770","local-member-id":"d9925a5c077e2b1a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:13:27.224532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:13:27.224576Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 20:22:52 up 14 min,  0 users,  load average: 0.14, 0.20, 0.14
	Linux default-k8s-diff-port-852598 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9] <==
	W0818 20:13:19.202878       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.209492       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.331347       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.411149       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.426741       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.468865       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.469212       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.481980       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.500751       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.545810       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.597802       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.612641       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.618161       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.631939       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.655679       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.666148       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.715375       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.765328       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.793969       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.864184       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.941552       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:20.030318       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:20.042823       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:20.174856       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:20.291613       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:18:29.574783       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:18:29.574910       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0818 20:18:29.576120       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:18:29.576204       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:19:29.576819       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:19:29.577134       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:19:29.577054       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:19:29.577283       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0818 20:19:29.578343       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:19:29.578395       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:21:29.579150       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:21:29.579376       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:21:29.579207       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:21:29.579427       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0818 20:21:29.580840       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:21:29.580870       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16] <==
	E0818 20:17:35.568193       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:17:36.029844       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:18:05.574349       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:18:06.037762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:18:35.582386       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:18:36.046546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:18:47.690623       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-852598"
	E0818 20:19:05.591521       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:19:06.058458       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:19:34.304370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="370.458µs"
	E0818 20:19:35.598636       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:19:36.066141       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:19:48.301337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="233.893µs"
	E0818 20:20:05.606438       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:20:06.074114       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:20:35.613216       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:20:36.083015       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:21:05.621133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:21:06.091519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:21:35.628624       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:21:36.099450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:22:05.635019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:22:06.108827       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:22:35.642431       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:22:36.116655       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 20:13:37.321386       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 20:13:37.331837       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.111"]
	E0818 20:13:37.331908       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 20:13:37.469694       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 20:13:37.473177       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 20:13:37.477486       1 server_linux.go:169] "Using iptables Proxier"
	I0818 20:13:37.500427       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 20:13:37.500725       1 server.go:483] "Version info" version="v1.31.0"
	I0818 20:13:37.500742       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 20:13:37.504650       1 config.go:197] "Starting service config controller"
	I0818 20:13:37.504681       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 20:13:37.504719       1 config.go:104] "Starting endpoint slice config controller"
	I0818 20:13:37.504725       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 20:13:37.506597       1 config.go:326] "Starting node config controller"
	I0818 20:13:37.506649       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 20:13:37.605931       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 20:13:37.605984       1 shared_informer.go:320] Caches are synced for service config
	I0818 20:13:37.606706       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9] <==
	W0818 20:13:28.589851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 20:13:28.590620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.590807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 20:13:28.590907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.590981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 20:13:28.591015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.591048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 20:13:28.591074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.591358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 20:13:28.591507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.592351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 20:13:28.592413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.605411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 20:13:29.605507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.618399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 20:13:29.618519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.706408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 20:13:29.706558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.774008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 20:13:29.774314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.818753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 20:13:29.818869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:30.002168       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 20:13:30.002325       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0818 20:13:33.080628       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 20:21:42 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:21:42.283056    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:21:51 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:21:51.464335    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012511463826817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:51 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:21:51.465795    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012511463826817,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:21:56 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:21:56.283000    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:22:01 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:01.467846    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012521467560934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:01 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:01.468112    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012521467560934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:10 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:10.283440    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:22:11 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:11.469629    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012531469225680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:11 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:11.469694    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012531469225680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:21 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:21.470924    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012541470548917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:21 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:21.470970    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012541470548917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:22 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:22.283191    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:22:31 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:31.313144    2926 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 20:22:31 default-k8s-diff-port-852598 kubelet[2926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 20:22:31 default-k8s-diff-port-852598 kubelet[2926]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 20:22:31 default-k8s-diff-port-852598 kubelet[2926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 20:22:31 default-k8s-diff-port-852598 kubelet[2926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 20:22:31 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:31.473312    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012551472737736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:31 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:31.473342    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012551472737736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:35 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:35.283607    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:22:41 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:41.475221    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012561474754637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:41 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:41.475627    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012561474754637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:49 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:49.288376    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:22:51 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:51.477842    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012571477455216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:22:51 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:22:51.477870    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012571477455216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107] <==
	I0818 20:13:38.188875       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 20:13:38.258212       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 20:13:38.260296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 20:13:38.327992       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 20:13:38.348011       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-852598_f9834ca9-c64d-4ce4-84fb-08d408f4c7f0!
	I0818 20:13:38.334802       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3de4d9f-7c90-4fd3-87cc-c8403f11a438", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-852598_f9834ca9-c64d-4ce4-84fb-08d408f4c7f0 became leader
	I0818 20:13:38.454353       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-852598_f9834ca9-c64d-4ce4-84fb-08d408f4c7f0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-852598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gjnsb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-852598 describe pod metrics-server-6867b74b74-gjnsb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-852598 describe pod metrics-server-6867b74b74-gjnsb: exit status 1 (61.274637ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gjnsb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-852598 describe pod metrics-server-6867b74b74-gjnsb: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.41s)
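The post-mortem above first lists pods whose status.phase is not Running (the field-selector kubectl call), then tries to describe each one; here the metrics-server pod had already been deleted, so the describe step exits non-zero. For reference, a minimal client-go sketch of that same "list non-running pods" query is below. It is not the test suite's code: the kubeconfig handling is simplified to the default ~/.kube/config and the program structure is purely illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); context selection is omitted for brevity.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}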

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
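Each WARNING that follows is one failed attempt of the helper's poll loop: it lists pods matching the k8s-app=kubernetes-dashboard label selector and retries until the 9m0s budget is spent, and while the apiserver at 192.168.50.105:8443 refuses connections every attempt fails the same way. A minimal sketch of such a poll loop is shown here; the function name, 15s interval, and clientset construction (as in the earlier sketch) are assumptions, not the actual helpers_test.go implementation.

package poll

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForDashboardPods polls until at least one pod with the given label exists
// or the timeout elapses, logging each failed list attempt along the way.
func waitForDashboardPods(ctx context.Context, cs *kubernetes.Clientset, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil && len(pods.Items) > 0 {
			return nil // at least one matching pod exists
		}
		if err != nil {
			// Corresponds to the WARNING lines below: log the error and keep retrying.
			log.Printf("pod list returned: %v", err)
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for k8s-app=kubernetes-dashboard pods")
		}
		time.Sleep(15 * time.Second)
	}
}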
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:16:30.830007   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:16:36.836742   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:16:44.019610   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:16:46.285507   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:17:17.015150   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:17:29.643666   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:17:51.701345   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:18:09.349919   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:18:16.211983   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:18:52.707511   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:19:14.765133   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:19:26.646677   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:19:39.275524   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:20:07.765469   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:20:13.771790   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:20:53.949337   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:21:44.019448   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:21:46.285376   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:22:29.643552   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:22:29.717681   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:22:51.701997   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:23:16.211636   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:24:26.646251   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:25:07.765435   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:25:13.772010   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 2 (227.019324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-247539" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
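The wait above polls the cluster for pods matching the k8s-app=kubernetes-dashboard label until the 9m0s deadline expires. A minimal manual equivalent of that check, assuming the kubeconfig context is named after the profile (minikube's default behavior), would be roughly:

	out/minikube-linux-amd64 status -p old-k8s-version-247539
	kubectl --context old-k8s-version-247539 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the API server reported as Stopped for this profile, the kubectl query would fail with the same connection-refused errors seen in the warnings above.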
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 2 (229.385856ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-247539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-247539 logs -n 25: (1.611321047s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-944426             | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-868662                  | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
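configureAuth above regenerates the machine's server certificate so that its subject alternative names cover 127.0.0.1, the VM address 192.168.39.125, and the host names listed at provision.go:117, then copies the CA and server key material into /etc/docker. The following is a minimal, self-contained Go sketch of issuing such a SAN-bearing server certificate from a CA; the in-memory throwaway CA, the output file name, and the ignored errors are illustrative choices for the sketch, not minikube's actual provisioning code.

// sancert.go: hedged sketch of issuing a server certificate whose SANs
// match the list logged by configureAuth (IPs plus DNS names).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: generate a throwaway CA in memory instead of
	// loading ca.pem/ca-key.pem from the minikube certificate store.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs shown in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-291295"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.125")},
		DNSNames:     []string{"embed-certs-291295", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Write server.pem; key output and error handling are omitted for brevity.
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}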
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
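fix.go above compares the guest clock (read with date +%s.%N over SSH) against the host clock and only resynchronizes when the delta exceeds a tolerance; here the 77.309613ms delta is accepted. A small Go sketch of that comparison follows, assuming an illustrative 1-second tolerance and reusing the guest timestamp from the log; it is a sketch, not the actual minikube code.

// clockdelta.go: hedged sketch of the guest-clock tolerance check seen in
// fix.go above: compare the guest's `date +%s.%N` output against host time
// and only resync when the delta exceeds a tolerance. The tolerance value
// and parsing helper are illustrative, not minikube's real constants.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock turns "1724011670.500173623" (seconds.nanoseconds) into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumption: 1s tolerance for illustration

	guest, err := parseGuestClock("1724011670.500173623") // value from the log
	if err != nil {
		panic(err)
	}
	host := time.Now()

	delta := guest.Sub(host)
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}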
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
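The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed commands (pause image, cgroup manager, conmon cgroup, default sysctls) and then restarts CRI-O. As a rough illustration, the Go sketch below performs just the pause_image edit; the local file path and the setPauseImage helper are assumptions for the example, not part of minikube.

// criopause.go: hedged sketch of one of the sed edits shown above, i.e.
// replacing the pause_image value in a CRI-O drop-in config file.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage rewrites `pause_image = "..."` lines in the given file,
// mirroring: sed -i 's|^.*pause_image = .*$|pause_image = "<img>"|'
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(path, updated, 0o644)
}

func main() {
	// Illustrative path; on a real node this would be
	// /etc/crio/crio.conf.d/02-crio.conf and would require root.
	if err := setPauseImage("02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, "edit failed:", err)
		os.Exit(1)
	}
}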
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
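The old-k8s-version-247539 machine is still waiting for a DHCP lease here, so retry.go keeps polling with a growing, jittered delay between attempts. A hedged Go sketch of that wait-for-IP loop follows; lookupIP and the backoff constants are illustrative stand-ins for the libvirt lease query and minikube's real retry parameters.

// waitip.go: hedged sketch of the "waiting for machine to come up" loop
// logged by retry.go above: poll for the domain's IP and back off with a
// growing, jittered delay between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is illustrative: pretend the DHCP lease appears after a few polls.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.125", nil
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 1; attempt <= 20; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Jittered, growing delay, loosely matching the intervals in the log.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		backoff = backoff * 3 / 2
	}
	fmt.Println("gave up waiting for machine IP")
}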
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
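Before deciding whether the existing control-plane certificates can be reused, the runner checks each one with openssl x509 -checkend 86400, i.e. whether it expires within the next 24 hours. A minimal Go equivalent of that check is sketched below; the certificate file name in main is illustrative.

// checkend.go: hedged sketch of the `openssl x509 -checkend 86400` calls
// above: parse a PEM certificate and report whether it expires within the
// next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// before now+window, mirroring openssl's -checkend semantics.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no certificate PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	expiring, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("certificate will expire within 24h; would regenerate")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}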
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
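The two fix.go lines above compare the guest's clock with the host's before continuing. A minimal manual sketch of the same check, assuming the SSH key path and guest IP shown earlier in this log (the awk skew calculation is illustrative, not minikube's code):

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa \
        docker@192.168.50.105 'date +%s.%N')
    # print the absolute skew; the run above measured ~94ms, which is within tolerance
    awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "delta=%.3fs\n", d }'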
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
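The block above is the runtime preparation for the legacy v1.20.0 cluster: point crictl at the crio socket, force the pause image and the cgroupfs cgroup manager, disable the bridge/podman CNI configs, load br_netfilter, enable IPv4 forwarding, and restart crio. Condensed into a standalone sketch (the same commands the test runs over SSH, with shell quoting added):

    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo find /etc/cni/net.d -maxdepth 1 -type f \( \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \) \
        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    sudo modprobe br_netfilter          # the bridge-nf-call-iptables sysctl was missing above
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo /usr/bin/crictl version        # the run above reports RuntimeName cri-o, RuntimeVersion 1.29.1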
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
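Because no image matching registry.k8s.io/kube-apiserver:v1.20.0 was found on the node, the ~473 MB preload tarball is copied in and unpacked into /var, where CRI-O keeps its storage. A sketch of the same steps, with the paths from the log:

    sudo crictl images --output json      # decide whether the preload is needed
    # (preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 is scp'd to /preloaded.tar.lz4)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4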
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
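Every image in the v1.20.0 set fails the on-node check above, so each one goes through the same inspect-then-remove pair before minikube falls back to its local image cache, which is itself empty here, hence the warning. A sketch of the per-image step, using kube-controller-manager as the example:

    # compare the on-node image ID against the expected digest; on mismatch, remove it
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-controller-manager:v1.20.0
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
    # minikube would then transfer .minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
    # and load it; that cache file does not exist, which produces the warning above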
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
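At this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and the generated kubeadm.yaml have all been written to the node and the kubelet restarted. One way to inspect what landed there, using the same paths as the scp lines above (the status check is just an illustrative extra):

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo cat /lib/systemd/system/kubelet.service
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    systemctl status kubelet --no-pager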
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
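The openssl calls above link each CA into the system trust store under its subject hash and then verify that every client/serving certificate remains valid for at least 24 hours (-checkend 86400). A sketch of the same two operations for one CA and one certificate, with the paths from the log:

    # link the CA under its OpenSSL subject hash (b5213941.0 in the run above)
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    # exit non-zero if the cert expires within the next 86400 seconds
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for >= 24h"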
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
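The server certificate copied to /etc/docker/server.pem above was regenerated with the SAN list logged at provision time (127.0.0.1, 192.168.72.111, default-k8s-diff-port-852598, localhost, minikube). A quick way to confirm those SANs on the guest, as a hypothetical check not performed by the test:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'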
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
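The tolerance check is plain arithmetic on the two timestamps above: guest 20:08:30.692571104 minus remote 20:08:30.619845545 is 0.072725559 s, i.e. the logged delta of 72.725559ms, small enough that no clock resync is attempted.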
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
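The "disabled" message above means the matching bridge/podman CNI files were renamed with a .mk_disabled suffix rather than deleted (see the find/mv command two lines up). To see what was parked, a hypothetical check not run by the test:

	ls -l /etc/cni/net.d/   # 87-podman-bridge.conflist should now appear as 87-podman-bridge.conflist.mk_disabled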
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
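Assuming the grep/sed pair above applied cleanly, the CRI-O drop-in now carries a default_sysctls list that opens unprivileged ports; a quick confirmation on the guest (illustrative, not part of the run):

	sudo grep -A2 '^default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]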
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
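The failed sysctl probe above is the expected path on a guest where br_netfilter is not loaded yet; minikube falls back to modprobe and then turns on IPv4 forwarding. The same fallback written out as a standalone sketch:

	# Probe the bridge-netfilter key; load the module only if the key is missing.
	sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null || sudo modprobe br_netfilter
	# Enable IPv4 forwarding so pod traffic can be routed off the node.
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"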
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
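The kubeadm.yaml rendered above is one multi-document YAML stream: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by "---" lines. As a rough illustration of that layout only (not minikube code; the yaml.v3 dependency and the hard-coded path are assumptions), the documents can be split and their kinds listed with a few lines of Go:

// Hedged sketch: split the generated multi-document kubeadm config and print
// each document's apiVersion/kind. gopkg.in/yaml.v3 is an assumed dependency;
// the path matches the kubeadm.yaml.new file written a few lines below.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%v / %v\n", m["apiVersion"], m["kind"])
	}
}

Run against the generated file, this would print the four apiVersion/kind pairs visible in the dump above.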
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
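The openssl invocations above follow a standard pattern: "openssl x509 -hash -noout -in <cert>" prints the subject-name hash that OpenSSL expects as the symlink name under /etc/ssl/certs (hence links such as b5213941.0 and 51391683.0), and "-checkend 86400" exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A minimal Go sketch of the same two checks, assuming openssl is on PATH (an illustration, not minikube's implementation):

// Hedged sketch: compute the subject-name hash used to name the
// /etc/ssl/certs/<hash>.0 symlink, then verify a certificate is still valid
// for at least 24h. The certificate paths are taken from the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Subject-name hash, used as the symlink name in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("symlink name:", strings.TrimSpace(string(out))+".0")

	// -checkend 86400 exits 0 only if the cert is valid for another 24h.
	err = exec.Command("openssl", "x509", "-noout",
		"-in", "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"-checkend", "86400").Run()
	fmt.Println("valid for 24h:", err == nil)
}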
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
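The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are a simple readiness poll: pgrep exits non-zero until a kube-apiserver process whose command line matches the pattern exists, and the check is retried roughly every 500ms. A minimal stand-alone sketch of that loop (illustrative only; the 2-minute timeout is an assumption, not minikube's value):

// Hedged sketch: wait for a process matching a pattern to appear, polling
// pgrep every 500ms, as in the "waiting for apiserver process" phase above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// -f matches the full command line, -x requires an exact pattern match,
		// -n reports only the newest matching process.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}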
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
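The long sequence of /healthz probes above shows the apiserver's post-start hooks coming up one by one (each [-] entry flips to [+]) until the endpoint finally answers 200 "ok". A rough sketch of such a poll, assuming a self-signed serving certificate is acceptable for the health probe (the address comes from the log; the timeout is an assumption, and minikube's real client setup may differ):

// Hedged sketch: poll an HTTPS /healthz endpoint until it returns 200 "ok".
// InsecureSkipVerify is used only because the probe targets a cluster-local,
// self-signed endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.111:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // prints "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}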
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
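	Every pod in the pass above is "skipped" because the node itself still reports Ready=False right after the kubelet restart, so the per-pod wait short-circuits. Below is a rough client-go sketch of that node-then-pod readiness check; the node, pod, and kubeconfig names are copied from the log, while the helper functions and control flow are illustrative assumptions rather than minikube's implementation.

```go
// Illustrative sketch: check node Ready before checking pod Ready, mirroring
// the "(skipping!)" behaviour in the log above. Assumes client-go is vendored.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19423-7747/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// If the hosting node is NotReady, skip the pod checks entirely; this is
	// what produces the "(skipping!)" lines in the log.
	node, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-944426", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if !nodeReady(node) {
		fmt.Println("node NotReady; skipping per-pod readiness checks")
		return
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-vqsgw", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod Ready=%v\n", podReady(pod))
}
```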
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
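	The addon enable sequence above boils down to copying each manifest onto the guest and running the in-guest kubectl against /var/lib/minikube/kubeconfig; the exact apply commands are visible in the log. As a rough illustration only, the sketch below replays those applies with os/exec. In the real test the commands run over SSH on the VM, and the helper shown here is an assumption for the example, not minikube's code.

```go
// Illustrative sketch: apply the addon manifests with the in-guest kubectl,
// as the log above does. Paths are taken from the log; in minikube this runs
// on the VM over SSH, here it is run locally via bash for simplicity.
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon runs "sudo KUBECONFIG=<kubeconfig> <kubectl> apply -f <manifest>".
func applyAddon(kubectl, kubeconfig, manifest string) error {
	cmdline := fmt.Sprintf("sudo KUBECONFIG=%s %s apply -f %s", kubeconfig, kubectl, manifest)
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	for _, m := range manifests {
		if err := applyAddon(kubectl, kubeconfig, m); err != nil {
			fmt.Println(err)
		}
	}
}
```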
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
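	The 74389 lines above (and the repeated passes that follow) come from a profile running v1.20.0 binaries: it scans the CRI runtime for each control-plane container, finds none, and then falls back to gathering kubelet, dmesg, CRI-O, and container-status logs because "kubectl describe nodes" cannot reach localhost:8443. Below is a small, illustrative Go wrapper around the same crictl invocation; the command itself is copied from the log, while the helper is an assumption for the example and would have to be run on the node.

```go
// Illustrative sketch: list CRI container IDs by name with crictl, the same
// invocation shown in the log above. An empty result is what produces the
// `found id: ""` / "No container was found matching" lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (running or exited)
// whose name matches the given filter.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	names := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range names {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	}
}
```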
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
	
	
	==> CRI-O <==
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.387308814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012723387282261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac08a36f-9a9c-446f-9983-48c4dbebc5e6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.387961818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8e13b1b-2016-4737-88f4-ae0308530799 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.388032641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8e13b1b-2016-4737-88f4-ae0308530799 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.388072086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d8e13b1b-2016-4737-88f4-ae0308530799 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.419898922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79ea7ee8-ef76-43f9-98af-cabc93d2c478 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.419988193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79ea7ee8-ef76-43f9-98af-cabc93d2c478 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.421307553Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=342e9db4-358b-45be-9636-e614ce5f04fc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.421739947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012723421720005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=342e9db4-358b-45be-9636-e614ce5f04fc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.422552813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aff949e7-09a2-4b02-bf48-5b020f4abca2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.422599226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aff949e7-09a2-4b02-bf48-5b020f4abca2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.422633853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aff949e7-09a2-4b02-bf48-5b020f4abca2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.458225740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89fe7107-2040-4eeb-897d-c55954a6417b name=/runtime.v1.RuntimeService/Version
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.458313935Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89fe7107-2040-4eeb-897d-c55954a6417b name=/runtime.v1.RuntimeService/Version
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.459340379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed76f464-53cc-4d16-ac6c-90cbc4658ed3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.459799400Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012723459775732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed76f464-53cc-4d16-ac6c-90cbc4658ed3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.460400495Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbcbd5c4-9c60-4aa1-9d95-bd5c594b5624 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.460513224Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbcbd5c4-9c60-4aa1-9d95-bd5c594b5624 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.460575553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fbcbd5c4-9c60-4aa1-9d95-bd5c594b5624 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.492913159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a878df3d-3147-48b3-a5a8-94e8b40f91b5 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.493025757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a878df3d-3147-48b3-a5a8-94e8b40f91b5 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.494244464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dfc09e5-3ad7-40b6-8703-5a65f7e11001 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.494693379Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012723494664698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dfc09e5-3ad7-40b6-8703-5a65f7e11001 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.495203305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e3887e3-148a-429c-b3a5-28f8a8a24c66 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.495281134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e3887e3-148a-429c-b3a5-28f8a8a24c66 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:25:23 old-k8s-version-247539 crio[653]: time="2024-08-18 20:25:23.495316613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4e3887e3-148a-429c-b3a5-28f8a8a24c66 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug18 20:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051405] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041581] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.935576] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug18 20:08] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637295] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.911494] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.071095] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080090] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.174365] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.151707] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.249665] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.351764] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.067129] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.161515] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[ +12.130980] kauditd_printk_skb: 46 callbacks suppressed
	[Aug18 20:12] systemd-fstab-generator[5096]: Ignoring "noauto" option for root device
	[Aug18 20:14] systemd-fstab-generator[5379]: Ignoring "noauto" option for root device
	[  +0.062456] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:25:23 up 17 min,  0 users,  load average: 0.10, 0.06, 0.04
	Linux old-k8s-version-247539 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc000be6118, 0x9, 0x9, 0x4f04880, 0xc0001a2900, 0x0, 0x0, 0x0, 0x0)
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000be60e0, 0xc000bf8390, 0xc000bf8390, 0x0, 0x0)
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0002aec40)
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: goroutine 112 [runnable]:
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000928a00, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001a2ae0, 0x0, 0x0)
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0002aec40)
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6562]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 18 20:25:18 old-k8s-version-247539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Aug 18 20:25:18 old-k8s-version-247539 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 18 20:25:18 old-k8s-version-247539 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6571]: I0818 20:25:18.824805    6571 server.go:416] Version: v1.20.0
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6571]: I0818 20:25:18.825217    6571 server.go:837] Client rotation is on, will bootstrap in background
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6571]: I0818 20:25:18.827215    6571 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6571]: I0818 20:25:18.828391    6571 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 18 20:25:18 old-k8s-version-247539 kubelet[6571]: W0818 20:25:18.828669    6571 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 2 (224.101016ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-247539" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.53s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (381.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291295 -n embed-certs-291295
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-18 20:28:27.947653753 +0000 UTC m=+6618.339993017
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-291295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-291295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.329µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-291295 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-291295 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-291295 logs -n 25: (2.074670757s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p newest-cni-868662                  | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:28 UTC | 18 Aug 24 20:28 UTC |
	| delete  | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:28 UTC | 18 Aug 24 20:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
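	(Editorial sketch, not part of the captured log: the runtime version probes recorded above can be reproduced by hand on the node; the socket path below is CRI-O's default as assumed throughout this report, not a value taken from the failure itself.)
	# Hypothetical manual re-run of the version checks the harness performs over SSH.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	crio --version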
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
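	(Editorial sketch, not part of the captured log: the kubeadm configuration rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new here and used by the init phases later in this log. One hedged way to sanity-check such a file by hand is kubeadm's standard dry-run mode; the command below is an assumption for illustration, not something the test performs.)
	# Ask kubeadm to show what it would do with the generated config without changing the node.
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run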
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
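	(Editorial sketch, not part of the captured log: the openssl -checkend probes above each ask whether a control-plane certificate remains valid for at least another 86400 seconds, i.e. 24 hours. A minimal loop form of the same check, assuming the standard /var/lib/minikube/certs layout shown in this log, would be:)
	# Report any certificate from the list above that expires, or cannot be read, within the next 24h.
	for c in apiserver-kubelet-client apiserver-etcd-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    || echo "$c.crt expires (or could not be read) within 24h"
	done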
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
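For reference, the fix.go lines above implement a simple guest-clock check: read "date +%s.%N" on the guest, diff it against the host's wall clock, and only resynchronize when the delta exceeds a tolerance. A minimal sketch of that comparison follows; the 2-second tolerance and the helper names are assumptions for illustration, not minikube's actual constants.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1724011690.717307394")
	remote := time.Now()
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}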
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
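The runtime switch above is a fixed sequence of systemctl commands run over SSH: stop the cri-docker and docker sockets and services, then disable and mask them so they cannot be socket-activated again. A hedged sketch of that sequence; runCmd is a hypothetical stand-in for minikube's ssh_runner, and failures are tolerated just as the log tolerates units that do not exist.

package main

import "fmt"

// disableDockerRuntimes mirrors the stop/disable/mask sequence in the log.
func disableDockerRuntimes(runCmd func(cmd string) error) {
	cmds := []string{
		"sudo systemctl stop -f cri-docker.socket",
		"sudo systemctl stop -f cri-docker.service",
		"sudo systemctl disable cri-docker.socket",
		"sudo systemctl mask cri-docker.service",
		"sudo systemctl stop -f docker.socket",
		"sudo systemctl stop -f docker.service",
		"sudo systemctl disable docker.socket",
		"sudo systemctl mask docker.service",
	}
	for _, c := range cmds {
		if err := runCmd(c); err != nil {
			// The real flow keeps going here; the unit may simply not exist.
			fmt.Printf("%q failed: %v (continuing)\n", c, err)
		}
	}
}

func main() {
	disableDockerRuntimes(func(cmd string) error { fmt.Println("RUN:", cmd); return nil })
}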
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
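The cri-o configuration above is done with in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf: point pause_image at the requested pause container, set cgroup_manager, drop any existing conmon_cgroup line, and re-add it as "pod". A small sketch that builds the same commands; the helper function is illustrative, not minikube's API.

package main

import "fmt"

// crioConfigCmds reproduces the sed edits from the log.
func crioConfigCmds(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
	}
}

func main() {
	for _, c := range crioConfigCmds("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(c)
	}
}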
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
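Because no preloaded images were found in the runtime, the preload tarball is copied over SSH and unpacked into /var with lz4, preserving extended attributes so file capabilities survive. A sketch of the same extraction step; the paths are the ones from the log and the program is only illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Extract a minikube preload tarball the same way the log does:
	// keep security.capability xattrs and decompress with lz4 into /var.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("preload extracted into /var")
}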
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
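The /etc/hosts updates above use a small idempotent shell idiom: filter out any existing line for the name (keyed on the tab that precedes it), append a fresh tab-separated entry, and install the temp file with sudo cp so only the final copy needs root. A sketch that reproduces the same one-liner, with the values taken from the log.

package main

import "fmt"

func main() {
	// Build the idempotent /etc/hosts rewrite seen in the log.
	ip, name := "192.168.50.105", "control-plane.minikube.internal"
	cmd := "{ grep -v $'\\t" + name + "$' \"/etc/hosts\"; echo \"" + ip + "\t" + name +
		"\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\""
	fmt.Println(cmd)
}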
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
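Each certificate installed above follows the same pattern: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so the system trust store resolves it. A sketch of that hashing step; the file name is the one from the log and the link command is shown only as a comment.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash used for the
	// /etc/ssl/certs/<hash>.0 symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// The log then runs:
	//   sudo /bin/bash -c "test -L <link> || ln -fs /etc/ssl/certs/minikubeCA.pem <link>"
	fmt.Printf("would link %s -> %s\n", link, pem)
}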
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
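The grep/rm pairs above implement the stale-config check: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the following kubeadm init phases regenerate it. A compact sketch of that loop; runCmd again stands in for the SSH command runner.

package main

import "fmt"

// cleanStaleKubeconfigs keeps a config file only if it already points at the
// expected control-plane endpoint, otherwise removes it so
// `kubeadm init phase kubeconfig` rewrites it.
func cleanStaleKubeconfigs(runCmd func(cmd string) error) {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if err := runCmd(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			// grep exits non-zero when the endpoint (or the file) is missing.
			_ = runCmd("sudo rm -f " + f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs(func(cmd string) error { fmt.Println("RUN:", cmd); return nil })
}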
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
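	The bridge CNI step writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist; the log does not show its content. A generic bridge conflist of the kind CRI-O would pick up might look like the following (content assumed for illustration; the real file minikube generates may differ):
	# Write a minimal bridge CNI config (assumed/illustrative content).
	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF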
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
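	The pod_ready loop above keeps skipping because the node itself is not yet Ready, then retries. The same conditions it is polling can be inspected by hand (kubectl assumed to run against the same cluster context; names taken from the log):
	# Why the wait keeps skipping: the node is NotReady.
	kubectl get node default-k8s-diff-port-852598 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# The per-pod Ready condition the wait is looking for:
	kubectl -n kube-system get pod coredns-6f6b679f8f-zfdn9 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'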
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
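	Taken together, the two SSH commands above amount to the following hostname provisioning script (a consolidated sketch of exactly what the log runs, with the machine name hard-coded):
	# Set the transient and persistent hostname, then make /etc/hosts resolve it.
	NAME=no-preload-944426
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	if ! grep -q "\s$NAME$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NAME/" /etc/hosts
	  else
	    echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
	  fi
	fi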
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
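	configureAuth regenerates a server certificate signed by the local minikube CA with the SANs listed above and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. minikube does this in Go; the openssl commands below are only an equivalent illustration (paths and SANs taken from the log, not the actual implementation):
	# Illustrative only: issue a server cert with the same SANs using openssl.
	CERTS=$HOME/.minikube/certs
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.no-preload-944426/CN=minikube"
	openssl x509 -req -in server.csr -days 365 \
	  -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" -CAcreateserial \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.228,DNS:localhost,DNS:minikube,DNS:no-preload-944426") \
	  -out server.pem
	scp server.pem server-key.pem "$CERTS/ca.pem" docker@192.168.61.228:/tmp/   # then moved into /etc/docker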
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
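	The container-runtime option above lands in a sysconfig drop-in and crio is restarted. A quick verification on the guest (assumes the crio unit sources /etc/sysconfig/crio.minikube, as on minikube's buildroot image):
	# Confirm the insecure-registry flag was written and crio picked it up.
	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio
	ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry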
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
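	The clock check above runs `date +%s.%N` on the guest and accepts the ~77 ms delta against the host. Roughly the same comparison from the host (sketch; assumes SSH access with the key shown earlier):
	# Compare guest vs. host clocks; a small delta is tolerated.
	GUEST=$(ssh -i ~/.minikube/machines/no-preload-944426/id_rsa docker@192.168.61.228 'date +%s.%N')
	HOST=$(date +%s.%N)
	echo "delta: $(echo "$HOST - $GUEST" | bc) s"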
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
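	Because the runtime is CRI-O, the cri-docker and docker units are stopped, disabled and masked before crio is configured. The sequence above condenses to the same systemctl calls shown in the log:
	# Make sure neither dockerd nor cri-dockerd can grab the CRI socket.
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet docker || echo "docker is inactive"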
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
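	The CRI-O setup above is a series of in-place edits to /etc/crio/crio.conf.d/02-crio.conf plus kernel prep; pulled together it looks like this (a consolidated sketch of the exact commands in the log):
	# Point crictl at crio, set the pause image and cgroupfs driver, allow
	# unprivileged low ports, and make sure forwarding/bridge traffic works.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio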
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
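The NodePressure step above reads each node's capacity (cpu, ephemeral storage) and verifies the node is not reporting pressure. A client-go sketch of the same idea; the kubeconfig path is the one used elsewhere in this log, and the specific pressure conditions checked are an assumption:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as used by the kubectl invocations in this log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
            // Report any pressure condition that is currently True.
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        fmt.Printf("  pressure condition %s is True\n", c.Type)
                    }
                }
            }
        }
    }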
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
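The block above is the post-restart wait: each system-critical pod's Ready condition is polled, but while the hosting node itself still reports Ready "False" the pod is skipped with the "(skipping!)" errors shown. A rough client-go helper in the same spirit (no main function; it assumes a *kubernetes.Clientset built as in the previous sketch, and the 2-second poll interval is made up):

    package podwait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodReady polls until the pod is Ready, but bails out early (like the
    // "skipping!" messages above) while the hosting node is not Ready.
    func waitPodReady(cs *kubernetes.Clientset, ns, name, nodeName string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                if !nodeReady(node) {
                    return false, fmt.Errorf("node %q not Ready, skipping pod %q", nodeName, name)
                }
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                return podReady(pod), nil
            })
    }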
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
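Each "new ssh client" entry above opens an SSH session into the VM (user docker, the per-machine id_rsa key) so that the following ssh_runner commands can execute inside it. A rough equivalent using golang.org/x/crypto/ssh; the address, user and key path are copied from the log, host-key checking is disabled only because this is a throwaway test VM, and the example command is an assumption:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH opens an SSH connection with a private key and runs one command,
    // roughly what the ssh_runner lines in the log are doing.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Address, user and key path copied from the log; illustrative only.
        out, err := runOverSSH("192.168.61.228:22", "docker",
            "/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa",
            "sudo systemctl is-active kubelet")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(out)
    }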
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
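The repeating 74389 blocks are a diagnostics loop: for each control-plane component it asks crictl for matching container IDs, finds none, and then falls back to collecting kubelet and CRI-O journals, dmesg and "describe nodes" output (which fails because nothing is listening on localhost:8443). The per-component listing boils down to something like the following sketch, run locally here rather than over SSH, which is a simplification:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same component names the log walks through on every cycle.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // With --quiet, crictl prints one container ID per line.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
        }
    }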
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
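The failed start above ends on the K8S_KUBELET_NOT_RUNNING path: kubeadm's wait-control-plane phase times out because the kubelet never answers on localhost:10248. As a minimal sketch of the retry the log itself suggests (not part of the captured run): the --extra-config flag comes from the suggestion logged above, the kvm2 driver and cri-o runtime are assumed from this job's configuration, v1.20.0 is the version shown in the init output, and <profile> is a placeholder for the affected cluster name.

	# hypothetical retry with an explicit kubelet cgroup driver; <profile> is a placeholder
	minikube start -p <profile> --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# if it still fails, inspect the kubelet on the node and collect full logs for an issue report
	minikube ssh -p <profile> -- sudo journalctl -xeu kubelet
	minikube logs -p <profile> --file=logs.txt
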
	
	
	==> CRI-O <==
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.453382563Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e89c78dc-0141-45b6-889c-9381599a39e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011976397625635,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-18T20:12:56.091035916Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fcd4659a45cd4144f3bfd1b628bbba6d8c52945cd3521658c7c0d6ab3fcaa26d,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-q9hsn,Uid:91faef36-1509-4f19-8ac7-e72e242d46a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011976339225104,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-q9hsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91faef36-1509-4f19-8ac7-e72e242d46a
4,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T20:12:56.030954839Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-fx7zv,Uid:42876c85-5d36-47b3-ba18-2cc7e3edcfd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011974522480929,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T20:12:54.213396033Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-6785z,Uid:6e4a0570-184c-4de8
-a23d-05cc0409a71f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011974503534749,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e4a0570-184c-4de8-a23d-05cc0409a71f,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T20:12:54.193413296Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&PodSandboxMetadata{Name:kube-proxy-8mv85,Uid:f46ec5d3-9303-47c1-b374-b0402d54427d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011974417739707,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-18T20:12:54.101301593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-291295,Uid:9d24d6fae092fccff3d46bd40de74db5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1724011963535098781,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.125:8443,kubernetes.io/config.hash: 9d24d6fae092fccff3d46bd40de74db5,kubernetes.io/config.seen: 2024-08-18T20:12:43.076425154Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e00431662dad23fb0af046ea9266
83b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-291295,Uid:e4b3f6826255983bf4f8dc44ddd29d67,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011963525250749,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e4b3f6826255983bf4f8dc44ddd29d67,kubernetes.io/config.seen: 2024-08-18T20:12:43.076426657Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-291295,Uid:07282ec0a77ec2e6b0a7e2b3a0a6b2d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011963514183033,Labels:map[string]string{component: ku
be-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,kubernetes.io/config.seen: 2024-08-18T20:12:43.076427911Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-291295,Uid:8985672dbacf5e7fbe155505efa34c2c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724011963505193096,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.3
9.125:2379,kubernetes.io/config.hash: 8985672dbacf5e7fbe155505efa34c2c,kubernetes.io/config.seen: 2024-08-18T20:12:43.076421983Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-291295,Uid:9d24d6fae092fccff3d46bd40de74db5,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1724011678272759293,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.125:8443,kubernetes.io/config.hash: 9d24d6fae092fccff3d46bd40de74db5,kubernetes.io/config.seen: 2024-08-18T20:07:57.812854354Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=801d699e-a017-43dc-b9c2-2b3f14905739 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.454322561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ad23f1b-65ef-4fef-a56f-a5fafa5e7d4e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.454398239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ad23f1b-65ef-4fef-a56f-a5fafa5e7d4e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.454724081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090,PodSandboxId:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011976512646308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec,PodSandboxId:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975557122055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05,PodSandboxId:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975391440415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
e4a0570-184c-4de8-a23d-05cc0409a71f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc,PodSandboxId:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724011974708669136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b,PodSandboxId:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011963799283434,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4,PodSandboxId:e00431662dad23fb0af046ea926683b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011963764829802,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690,PodSandboxId:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011963752073740,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289,PodSandboxId:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011963689093858,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd,PodSandboxId:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011679722597694,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ad23f1b-65ef-4fef-a56f-a5fafa5e7d4e name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.474780295Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39ade050-e40b-48de-bbaa-fd8b01047126 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.474860504Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39ade050-e40b-48de-bbaa-fd8b01047126 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.476637069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2eebfe01-ccda-4e19-8ada-b660213a38c9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.477569126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012909477439103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2eebfe01-ccda-4e19-8ada-b660213a38c9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.478174401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24471855-cda0-4f44-9888-a2759c1af4f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.478223770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24471855-cda0-4f44-9888-a2759c1af4f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.478416156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090,PodSandboxId:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011976512646308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec,PodSandboxId:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975557122055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05,PodSandboxId:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975391440415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
e4a0570-184c-4de8-a23d-05cc0409a71f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc,PodSandboxId:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724011974708669136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b,PodSandboxId:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011963799283434,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4,PodSandboxId:e00431662dad23fb0af046ea926683b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011963764829802,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690,PodSandboxId:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011963752073740,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289,PodSandboxId:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011963689093858,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd,PodSandboxId:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011679722597694,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24471855-cda0-4f44-9888-a2759c1af4f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.515064432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e8fcc10-fa06-4e73-88a1-16338b872d3a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.515132779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e8fcc10-fa06-4e73-88a1-16338b872d3a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.516049524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1bcb9bb-37e9-4565-8136-19a9a4e10b9b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.516431150Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012909516411728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1bcb9bb-37e9-4565-8136-19a9a4e10b9b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.516950213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6c6db68-d3e8-4331-ae9c-184d75babd5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.517020206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6c6db68-d3e8-4331-ae9c-184d75babd5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.517217076Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090,PodSandboxId:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011976512646308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec,PodSandboxId:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975557122055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05,PodSandboxId:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975391440415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
e4a0570-184c-4de8-a23d-05cc0409a71f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc,PodSandboxId:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724011974708669136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b,PodSandboxId:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011963799283434,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4,PodSandboxId:e00431662dad23fb0af046ea926683b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011963764829802,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690,PodSandboxId:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011963752073740,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289,PodSandboxId:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011963689093858,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd,PodSandboxId:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011679722597694,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6c6db68-d3e8-4331-ae9c-184d75babd5c name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.554813227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdaddfe4-1947-4428-aba4-d86205caae13 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.554903622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdaddfe4-1947-4428-aba4-d86205caae13 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.555811538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd82556f-a74e-48a7-a93d-984f6db2adee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.556200341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012909556180954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd82556f-a74e-48a7-a93d-984f6db2adee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.556730318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b569768-bd46-47f9-b805-7e86191747ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.556806830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b569768-bd46-47f9-b805-7e86191747ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:29 embed-certs-291295 crio[726]: time="2024-08-18 20:28:29.556998262Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090,PodSandboxId:f518f87205216bb91cfd93e4e69aa3075bef1064f921da48c479d9b815481e47,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011976512646308,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e89c78dc-0141-45b6-889c-9381599a39e2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec,PodSandboxId:4a123aa652725925e80353cabc065447d8477f1bc1f36b623dd89e1a46467e1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975557122055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fx7zv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42876c85-5d36-47b3-ba18-2cc7e3edcfd2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05,PodSandboxId:2019feb2f004cc27b2d9bdeec8906e4fed0c653c4208b489f14a87e63febfd4e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011975391440415,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6785z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
e4a0570-184c-4de8-a23d-05cc0409a71f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc,PodSandboxId:9d190df7bad335f62164e16f08abffd80b09fcafb3029b62bab3e7e712bf1f03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724011974708669136,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8mv85,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46ec5d3-9303-47c1-b374-b0402d54427d,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b,PodSandboxId:8ea0cb042a7fd8d22433b258b60007137dc4d96d023a890065754809482f806d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011963799283434,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4,PodSandboxId:e00431662dad23fb0af046ea926683b6781d17bd7dd2f5b3ea1735b80d9c8e77,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011963764829802,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4b3f6826255983bf4f8dc44ddd29d67,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690,PodSandboxId:5ac93ea9b904148a749ab7adc26125201f8b4ca83c8a1b9e6ae63260e198b27e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011963752073740,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8985672dbacf5e7fbe155505efa34c2c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289,PodSandboxId:a91035800220aae2b589bd7e31946c854c20014fcc5049f10bf3c8640d63295e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011963689093858,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07282ec0a77ec2e6b0a7e2b3a0a6b2d8,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd,PodSandboxId:25c7699f0112af6a8032de477d66766ac8f6fd5fd054700742d3b5a8e5175e36,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011679722597694,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-291295,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d24d6fae092fccff3d46bd40de74db5,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b569768-bd46-47f9-b805-7e86191747ab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f0de7e681aa8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   f518f87205216       storage-provisioner
	f4e0e21856013       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   4a123aa652725       coredns-6f6b679f8f-fx7zv
	f267f32a822b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   2019feb2f004c       coredns-6f6b679f8f-6785z
	e424c63d99aaa       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   15 minutes ago      Running             kube-proxy                0                   9d190df7bad33       kube-proxy-8mv85
	1b2a5ae4a234a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   15 minutes ago      Running             kube-apiserver            2                   8ea0cb042a7fd       kube-apiserver-embed-certs-291295
	4d4358c293b3f       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   15 minutes ago      Running             kube-controller-manager   2                   e00431662dad2       kube-controller-manager-embed-certs-291295
	c0fc030335e27       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   5ac93ea9b9041       etcd-embed-certs-291295
	bed4b8074227b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   15 minutes ago      Running             kube-scheduler            2                   a91035800220a       kube-scheduler-embed-certs-291295
	3669d8f48420b       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   20 minutes ago      Exited              kube-apiserver            1                   25c7699f0112a       kube-apiserver-embed-certs-291295
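	(The table above is the CRI-O view of the node at the time the report was taken. A minimal way to reproduce a similar listing by hand — a sketch, assuming the embed-certs-291295 profile named in this log is still running and crictl is present on the node, as it is in these VMs:
	    minikube ssh -p embed-certs-291295 -- sudo crictl ps -a
	)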
	
	
	==> coredns [f267f32a822b4419f69e5d72110ac3b3b755035efdd40af41f387c349f2faf05] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f4e0e21856013f03bf81e312f52ea1efbaaf62e71ba5ce73d405c93062cb45ec] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
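	(Both CoreDNS blocks above only show the startup banner; for fuller output one could pull the pod logs directly — a sketch, assuming the kubeconfig context created for this profile still exists and using the pod names shown above:
	    kubectl --context embed-certs-291295 -n kube-system logs coredns-6f6b679f8f-fx7zv
	    kubectl --context embed-certs-291295 -n kube-system logs coredns-6f6b679f8f-6785z
	)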
	
	
	==> describe nodes <==
	Name:               embed-certs-291295
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-291295
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=embed-certs-291295
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 20:12:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-291295
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 20:28:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 20:28:16 +0000   Sun, 18 Aug 2024 20:12:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 20:28:16 +0000   Sun, 18 Aug 2024 20:12:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 20:28:16 +0000   Sun, 18 Aug 2024 20:12:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 20:28:16 +0000   Sun, 18 Aug 2024 20:12:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    embed-certs-291295
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac8770b9ab2443f4b3f49d534e03a2f9
	  System UUID:                ac8770b9-ab24-43f4-b3f4-9d534e03a2f9
	  Boot ID:                    09586b2d-ed77-4128-a371-c04b89982a74
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-6785z                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-6f6b679f8f-fx7zv                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-291295                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-291295             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-291295    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-8mv85                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-291295             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-6867b74b74-q9hsn               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-291295 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-291295 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-291295 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-291295 event: Registered Node embed-certs-291295 in Controller
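	(The node description above — labels, conditions, allocatable resources, non-terminated pods, and events — can be re-queried at any time; a minimal sketch, assuming the profile's kubeconfig context is still available:
	    kubectl --context embed-certs-291295 describe node embed-certs-291295
	)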
	
	
	==> dmesg <==
	[  +0.050152] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040249] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769839] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.380126] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.632233] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.086018] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.058078] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061516] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.214477] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.135342] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.309673] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[  +4.231008] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.057862] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.640665] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[Aug18 20:08] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.977381] kauditd_printk_skb: 85 callbacks suppressed
	[Aug18 20:12] systemd-fstab-generator[2585]: Ignoring "noauto" option for root device
	[  +0.071066] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.995172] systemd-fstab-generator[2906]: Ignoring "noauto" option for root device
	[  +0.093774] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.814517] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.007216] systemd-fstab-generator[3049]: Ignoring "noauto" option for root device
	[Aug18 20:13] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [c0fc030335e27d7c65b93accd090c367a3caffa5cfaa5488578530b24f52c690] <==
	{"level":"info","ts":"2024-08-18T20:12:44.802686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-18T20:12:44.802745Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgPreVoteResp from f4d3edba9e42b28c at term 1"}
	{"level":"info","ts":"2024-08-18T20:12:44.802780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became candidate at term 2"}
	{"level":"info","ts":"2024-08-18T20:12:44.802805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c received MsgVoteResp from f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-08-18T20:12:44.802832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4d3edba9e42b28c became leader at term 2"}
	{"level":"info","ts":"2024-08-18T20:12:44.802858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 2"}
	{"level":"info","ts":"2024-08-18T20:12:44.807735Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:12:44.811809Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"f4d3edba9e42b28c","local-member-attributes":"{Name:embed-certs-291295 ClientURLs:[https://192.168.39.125:2379]}","request-path":"/0/members/f4d3edba9e42b28c/attributes","cluster-id":"9838e9e2cfdaeabf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T20:12:44.814604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:12:44.814994Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:12:44.815230Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9838e9e2cfdaeabf","local-member-id":"f4d3edba9e42b28c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:12:44.815337Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:12:44.815378Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:12:44.816079Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:12:44.821692Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.125:2379"}
	{"level":"info","ts":"2024-08-18T20:12:44.823075Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:12:44.823840Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T20:12:44.831557Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T20:12:44.831681Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T20:22:44.881300Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-08-18T20:22:44.890232Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":682,"took":"8.503754ms","hash":2281567582,"current-db-size-bytes":2179072,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2179072,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-18T20:22:44.890286Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2281567582,"revision":682,"compact-revision":-1}
	{"level":"info","ts":"2024-08-18T20:27:44.889739Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-08-18T20:27:44.893613Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":925,"took":"3.556574ms","hash":3751963735,"current-db-size-bytes":2179072,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-18T20:27:44.893664Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3751963735,"revision":925,"compact-revision":682}
	
	
	==> kernel <==
	 20:28:29 up 20 min,  0 users,  load average: 0.14, 0.15, 0.12
	Linux embed-certs-291295 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1b2a5ae4a234a3fc9295893bd8d6d2bbc520713e53a8d7974d7453111839f18b] <==
	I0818 20:23:47.389568       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:23:47.389635       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:25:47.390131       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:25:47.390285       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:25:47.390140       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:25:47.390336       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0818 20:25:47.391680       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:25:47.391720       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:27:46.391824       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:27:46.392153       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:27:47.393948       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:27:47.394005       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0818 20:27:47.394088       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:27:47.394147       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0818 20:27:47.395147       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:27:47.395190       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [3669d8f48420b8378bf4ce3936cc801bd516587e100d3d568e36f21c21717fdd] <==
	W0818 20:12:39.560358       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.636151       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.643799       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.652267       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.686669       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.696352       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.715917       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.726704       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.753920       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.825574       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.846147       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.873137       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.919373       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.950877       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:39.990897       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.057712       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.061044       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.134172       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.149907       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.184688       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.319357       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.330313       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.485003       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.524071       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:12:40.768436       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4d4358c293b3f4f547b085154c788ef6c45fc06f766e4cc13e5a568a55bab1d4] <==
	E0818 20:23:23.425261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:23:23.892194       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:23:53.431863       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:23:53.901748       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:24:07.931348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="178.734µs"
	I0818 20:24:19.927898       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.738µs"
	E0818 20:24:23.437983       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:24:23.909255       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:24:53.446776       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:24:53.916986       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:25:23.454281       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:25:23.926281       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:25:53.461271       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:25:53.935728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:26:23.467378       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:26:23.943906       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:26:53.473659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:26:53.953921       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:27:23.481557       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:27:23.961885       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:27:53.488274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:27:53.970234       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:28:16.366259       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-291295"
	E0818 20:28:23.499297       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:28:23.978474       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e424c63d99aaabb9de6d7b4056b0a046159940363d5e4cd25a09c0e235e0bfbc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 20:12:55.214976       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 20:12:55.244211       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.125"]
	E0818 20:12:55.244311       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 20:12:55.475986       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 20:12:55.476024       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 20:12:55.476060       1 server_linux.go:169] "Using iptables Proxier"
	I0818 20:12:55.482033       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 20:12:55.482286       1 server.go:483] "Version info" version="v1.31.0"
	I0818 20:12:55.482297       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 20:12:55.485063       1 config.go:197] "Starting service config controller"
	I0818 20:12:55.485100       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 20:12:55.485127       1 config.go:104] "Starting endpoint slice config controller"
	I0818 20:12:55.485131       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 20:12:55.485467       1 config.go:326] "Starting node config controller"
	I0818 20:12:55.485558       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 20:12:55.585784       1 shared_informer.go:320] Caches are synced for node config
	I0818 20:12:55.585849       1 shared_informer.go:320] Caches are synced for service config
	I0818 20:12:55.585872       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [bed4b8074227b834867ea635a757b6c39c2c04c1ef63ce22836c5d46eb2d7289] <==
	W0818 20:12:46.446261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 20:12:46.446272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:46.446483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 20:12:46.446553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:46.446600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 20:12:46.446629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:46.446684       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 20:12:46.446712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.289979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 20:12:47.290055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.362824       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 20:12:47.362911       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0818 20:12:47.382469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 20:12:47.382577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.395803       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0818 20:12:47.395950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.530013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 20:12:47.530472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.530786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 20:12:47.530884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.564388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 20:12:47.564440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:12:47.567607       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 20:12:47.567984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0818 20:12:50.534239       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 20:27:21 embed-certs-291295 kubelet[2913]: E0818 20:27:21.912192    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:27:29 embed-certs-291295 kubelet[2913]: E0818 20:27:29.174445    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012849174116157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:29 embed-certs-291295 kubelet[2913]: E0818 20:27:29.174530    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012849174116157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:33 embed-certs-291295 kubelet[2913]: E0818 20:27:33.913259    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:27:39 embed-certs-291295 kubelet[2913]: E0818 20:27:39.176383    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012859176019202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:39 embed-certs-291295 kubelet[2913]: E0818 20:27:39.176883    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012859176019202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:44 embed-certs-291295 kubelet[2913]: E0818 20:27:44.912289    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:27:48 embed-certs-291295 kubelet[2913]: E0818 20:27:48.938870    2913 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 20:27:48 embed-certs-291295 kubelet[2913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 20:27:48 embed-certs-291295 kubelet[2913]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 20:27:48 embed-certs-291295 kubelet[2913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 20:27:48 embed-certs-291295 kubelet[2913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 20:27:49 embed-certs-291295 kubelet[2913]: E0818 20:27:49.180633    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012869180014138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:49 embed-certs-291295 kubelet[2913]: E0818 20:27:49.180660    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012869180014138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:56 embed-certs-291295 kubelet[2913]: E0818 20:27:56.912538    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:27:59 embed-certs-291295 kubelet[2913]: E0818 20:27:59.182833    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012879181988044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:59 embed-certs-291295 kubelet[2913]: E0818 20:27:59.182862    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012879181988044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:09 embed-certs-291295 kubelet[2913]: E0818 20:28:09.185591    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012889184825707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:09 embed-certs-291295 kubelet[2913]: E0818 20:28:09.186287    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012889184825707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:11 embed-certs-291295 kubelet[2913]: E0818 20:28:11.912733    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:28:19 embed-certs-291295 kubelet[2913]: E0818 20:28:19.188485    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012899188066036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:19 embed-certs-291295 kubelet[2913]: E0818 20:28:19.188949    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012899188066036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:26 embed-certs-291295 kubelet[2913]: E0818 20:28:26.913039    2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-q9hsn" podUID="91faef36-1509-4f19-8ac7-e72e242d46a4"
	Aug 18 20:28:29 embed-certs-291295 kubelet[2913]: E0818 20:28:29.190728    2913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012909190281968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:29 embed-certs-291295 kubelet[2913]: E0818 20:28:29.190764    2913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012909190281968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f0de7e681aa8ab92e5c7c2bdff9ad593879f47b45663ea64faf709878c9f0090] <==
	I0818 20:12:56.618845       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 20:12:56.628333       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 20:12:56.629906       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 20:12:56.637406       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 20:12:56.637640       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-291295_8f582d16-79a9-4289-8f9b-09fa7d6f7eb7!
	I0818 20:12:56.638187       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4d9b9ef5-1dd3-496c-8492-d6d91bae983c", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-291295_8f582d16-79a9-4289-8f9b-09fa7d6f7eb7 became leader
	I0818 20:12:56.738235       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-291295_8f582d16-79a9-4289-8f9b-09fa7d6f7eb7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291295 -n embed-certs-291295
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-291295 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-q9hsn
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-291295 describe pod metrics-server-6867b74b74-q9hsn
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-291295 describe pod metrics-server-6867b74b74-q9hsn: exit status 1 (58.606775ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-q9hsn" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-291295 describe pod metrics-server-6867b74b74-q9hsn: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (381.70s)
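
The component logs above show the metrics-server pod for this profile stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 (the addon was enabled with --registries=MetricsServer=fake.domain, so the pull can never succeed), and the v1beta1.metrics.k8s.io APIService repeatedly failing with 503. A minimal sketch of how that state could be inspected by hand, assuming the embed-certs-291295 cluster is still running and its kubeconfig context exists (these read-only kubectl queries are illustrative and are not part of the test suite):

	# deployment behind the metrics-server-6867b74b74 ReplicaSet seen in the logs
	kubectl --context embed-certs-291295 -n kube-system get deploy metrics-server
	# aggregated API that the apiserver log reports as unavailable (503)
	kubectl --context embed-certs-291295 get apiservice v1beta1.metrics.k8s.io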

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (337.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-944426 -n no-preload-944426
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-18 20:28:22.783108966 +0000 UTC m=+6613.175448237
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-944426 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-944426 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.742µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-944426 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
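The assertion at this step checks that the dashboard-metrics-scraper deployment carries the registry.k8s.io/echoserver:1.4 image passed via "addons enable dashboard --images=MetricsScraper=..." (see the Audit table below); the describe call returned nothing, apparently because the test's own context deadline had already expired (1.742µs). A minimal sketch of how that image field could be read directly, assuming the no-preload-944426 context is still reachable and the deployment exists (an illustrative query, not part of the test):

	kubectl --context no-preload-944426 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

The jsonpath expression only prints the container image(s) of the deployment's pod template and makes no changes to the cluster.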
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-944426 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-944426 logs -n 25: (2.110281305s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-944426             | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-868662                  | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:28 UTC | 18 Aug 24 20:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
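	(The sed edits logged above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment; this is reconstructed from the commands shown, not captured from the VM:
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ])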
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
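	(The retry.go entries above show the generic wait-and-retry pattern used while the old-k8s-version-247539 VM acquires a DHCP lease. A minimal stand-alone Go sketch of that pattern follows; the function names are illustrative only, not minikube's actual API:
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// waitForIP polls lookup until it returns a non-empty IP or the deadline passes,
	// sleeping a little longer after each failed attempt, like the retry.go entries above.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		delay := 200 * time.Millisecond
		end := time.Now().Add(deadline)
		for time.Now().Before(end) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
			delay += delay / 2 // back off a little more each round
		}
		return "", errors.New("timed out waiting for machine IP")
	}
	
	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 3 {
				return "", errors.New("no DHCP lease yet") // stand-in for the libvirt lease lookup
			}
			return "192.168.39.125", nil
		}, 10*time.Second)
		fmt.Println(ip, err)
	})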
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
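	(Each "openssl x509 ... -checkend 86400" run above verifies that a certificate will still be valid 24 hours from now. An equivalent stand-alone check in Go, a sketch using only the standard library rather than minikube's own code, would be:
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// notExpiringSoon reports whether the PEM certificate at path is still valid
	// 24 hours from now, which is what "openssl x509 -checkend 86400" asserts.
	func notExpiringSoon(path string) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
	}
	
	func main() {
		ok, err := notExpiringSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		fmt.Println(ok, err)
	})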
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
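The provisioning step just above writes /etc/sysconfig/crio.minikube with the --insecure-registry option for the service CIDR and restarts CRI-O. A minimal sketch for confirming the result on the guest (paths are the ones shown in the log):

    # Hedged sketch: confirm the option file written above and that CRI-O
    # restarted cleanly (run on the guest).
    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio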
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
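The fix.go lines above compare the guest clock (date +%s.%N over SSH) against the host clock and accept the ~72ms delta. A minimal sketch of the same check run from the host; the IP, SSH key path, and user are the ones this run's log shows:

    # Hedged sketch: measure host/guest clock skew the way the log does.
    HOST_TS=$(date +%s.%N)
    GUEST_TS=$(ssh -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa \
      docker@192.168.72.111 'date +%s.%N')
    # The delta should stay well under a second, like the ~72ms value above.
    echo "$GUEST_TS - $HOST_TS" | bc -l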
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
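The block above stops, disables, and masks cri-docker and docker so CRI-O is the only container runtime left. A condensed sketch of the same switch done by hand on the guest (commands mirror those in the log):

    # Hedged sketch: the runtime switch done above, condensed (run on the guest).
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    # Nothing should report active before CRI-O is configured.
    systemctl is-active cri-docker.service docker || true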
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
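The sed runs above rewrite the CRI-O drop-in for the pause image, the cgroup driver, and unprivileged ports. A condensed, commented sketch of the same edits (paths and values copied from the log; run on the guest):

    # Hedged sketch: the CRI-O drop-in edits from the log, condensed.
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pause image kubeadm expects for v1.31.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    # Match the kubelet's cgroupfs driver and keep conmon in the pod cgroup.
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Let pods bind privileged ports: drop any stale entry, make sure the
    # default_sysctls list exists, then prepend the sysctl to it.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"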
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
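Before the CRI-O restart above, the log checks the bridge netfilter sysctl (missing until br_netfilter is loaded) and enables IPv4 forwarding. A minimal sketch of those kernel prerequisites on the guest:

    # Hedged sketch: kernel prerequisites checked above before restarting CRI-O.
    sudo modprobe br_netfilter                      # provides bridge-nf-call-iptables
    sudo sysctl net.bridge.bridge-nf-call-iptables  # should resolve now
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio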
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
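Since the preloaded images were not found on the guest, the ~389MB preload tarball is copied over; later in this log it is extracted under /var and removed. A minimal sketch of that flow on the guest (tarball path and tar flags are the ones this run uses):

    # Hedged sketch: the preload flow visible in the log (run on the guest).
    stat -c "%s %y" /preloaded.tar.lz4 || echo "preload tarball not copied yet"
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json | head    # images should now be present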
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
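The generated kubelet unit above pins --hostname-override and --node-ip for this node; later in the log it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch for inspecting the result on the guest:

    # Hedged sketch: inspect the kubelet drop-in minikube generates
    # (destination path taken from the scp later in this log).
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf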
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
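The kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new; the restart path later diffs it against the active file and promotes it before running the kubeadm init phases. A minimal sketch of that promotion (both commands appear further down in this log):

    # Hedged sketch: compare the staged kubeadm config with the active one,
    # then promote it, as the cluster-restart path does.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml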
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
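The openssl hashing and the <hash>.0 symlinks above are what make the minikube CA and the test certificates trusted system-wide on the guest. A minimal sketch of the pattern for one certificate (the hash value b5213941 matches this run):

    # Hedged sketch: the subject-hash symlink pattern used above, for one cert.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"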
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
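The -checkend 86400 probes above flag any control-plane certificate that would expire within 24 hours. A minimal sketch of the same check over a couple of the certs (paths from the log; the full run checks several more):

    # Hedged sketch: non-zero exit from -checkend means "expires within 24h".
    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
             /var/lib/minikube/certs/etcd/server.crt; do
      sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "$c expires within 24h"
    done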
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
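
The /etc/hosts edit above is idempotent: it drops any existing control-plane.minikube.internal entry, appends the current IP, and copies the result back over /etc/hosts with sudo. A minimal sketch of building that same one-liner in Go; in practice the command string would then be handed to an SSH runner.

package main

import "fmt"

// hostsPinCommand rebuilds /etc/hosts with exactly one entry for the given
// hostname, mirroring the bash one-liner shown in the log above. The \\t in
// the grep pattern is ANSI-C quoted by bash; the echo embeds a real tab.
func hostsPinCommand(ip, host string) string {
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", host, ip, host)
}

func main() {
	cmd := hostsPinCommand("192.168.61.228", "control-plane.minikube.internal")
	fmt.Println("/bin/bash -c " + cmd)
}
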
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
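
Each CA file copied above is made discoverable to OpenSSL by linking it as /etc/ssl/certs/<hash>.0, where the hash comes from openssl x509 -hash -noout. A small local sketch of those two steps, assuming openssl is on PATH and the process can write to /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash for certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that library lookups expect.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked, mirrors the "test -L ... ||" guard in the log
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
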
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
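
The run of openssl x509 -checkend 86400 commands above asks whether each control-plane certificate will still be valid 24 hours from now. The same check can be done in pure Go with crypto/x509; a sketch using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// before now+window, the Go equivalent of "openssl x509 -checkend".
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
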
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
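
Because none of the /etc/kubernetes/*.conf files exist yet, every grep for the control-plane endpoint exits with status 2 and the subsequent rm is a no-op, after which the fresh kubeadm.yaml is copied into place. A condensed sketch of that cleanup loop, assuming missing files and files pointing at a stale endpoint are handled the same way:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the expected endpoint, keep it
		}
		// Either the file is missing or it references a stale endpoint:
		// remove it so "kubeadm init phase kubeconfig" regenerates it.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, "removing", f, ":", err)
		}
	}
}
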
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
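
After the kubeadm init phases, the code polls roughly twice a second for a kube-apiserver process before switching to the HTTP health check; here the process shows up after about one second. A minimal sketch of that polling loop, shelling out to pgrep as the log does:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process exists
// or the timeout elapses, mirroring the repeated pgrep lines in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
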
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
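
The health wait above moves through three phases: connection refused while the apiserver is still binding, 403 for the anonymous user until the RBAC bootstrap roles exist, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200. A small sketch of such a poller, assuming it is acceptable to skip TLS verification against the apiserver's self-signed serving certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 OK.
// 403 and 500 responses are expected while bootstrap post-start hooks run.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.61.228:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
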
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
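
With the kvm2 driver and the crio runtime, minikube falls back to the built-in bridge CNI and writes a small conflist into /etc/cni/net.d. The 496-byte file itself is not shown in the log, so the sketch below writes an illustrative bridge plus host-local configuration using the cluster's 10.244.0.0/16 pod subnet; it should not be read as the exact file minikube generates.

package main

import "os"

// An illustrative bridge CNI configuration; field values other than the pod
// subnet are assumptions, not copied from minikube's template.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
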
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
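
Every system pod above is skipped rather than waited on because the node itself still reports Ready as False right after the kubelet restart, and pod readiness is only meaningful once the node is Ready. A compact client-go sketch of that check; the kubeconfig path is a placeholder and the node and pod names are taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-944426", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			fmt.Println("node not Ready, skip waiting on its pods") // mirrors the pod_ready.go messages above
			return
		}
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-vqsgw", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Println("pod Ready condition:", c.Status)
		}
	}
}
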
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
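
The 73711 block above shows the pattern the addon-enable path follows on this profile: each manifest is copied over SSH into /etc/kubernetes/addons on the guest and then applied with the bundled kubectl against the in-VM kubeconfig. The Go sketch below is purely illustrative and is not minikube's ssh_runner/addons code; the host address, key path, and kubectl path are copied from the log only as example values and would differ per profile.

    // applyaddon.go — illustrative sketch of the copy-then-apply pattern in the log above.
    // NOT minikube's implementation; host, key path and manifest list are placeholders.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    const (
        host    = "docker@192.168.61.228"                       // example from the log's ssh client line
        keyPath = "/path/to/machines/<profile>/id_rsa"          // placeholder private key path
        kubectl = "/var/lib/minikube/binaries/v1.31.0/kubectl"  // in-VM kubectl, as in the log
    )

    // run executes a command on the guest VM over ssh, mirroring the ssh_runner "Run:" lines.
    func run(remoteCmd string) error {
        cmd := exec.Command("ssh", "-i", keyPath, "-o", "StrictHostKeyChecking=no", host, remoteCmd)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %v\n%s", remoteCmd, err, out)
        }
        return nil
    }

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        // Apply all metrics-server manifests in one kubectl invocation, as the log does.
        args := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " + kubectl + " apply"
        for _, m := range manifests {
            args += " -f " + m
        }
        if err := run(args); err != nil {
            log.Fatal(err)
        }
        fmt.Println("metrics-server manifests applied")
    }

In the actual run the storage-provisioner, storageclass and metrics-server applies proceed concurrently, which is consistent with the interleaved "Making call to close driver server" callbacks above.
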
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
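
The pod_ready lines for process 73711 above poll each system pod until its Ready condition turns True and record the wait as a duration metric. The sketch below shows the same check done directly with client-go; it is an illustration only, not minikube's pod_ready.go, and the kubeconfig path and pod name are copied from the log purely as stand-ins.

    // podready.go — hedged sketch of the readiness polling pattern seen above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Wait up to 6 minutes, checking every 2 seconds, like the "waiting up to 6m0s" lines.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-vqsgw", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling on transient errors
                }
                return podReady(pod), nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
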
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
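
The repeated 74389 blocks above (the profile running the v1.20.0 kubectl binaries) all follow the same diagnostic loop: pgrep for a running apiserver, then "crictl ps -a --quiet --name=<component>" for each control-plane component, and, since nothing is found, a fallback to kubelet/dmesg/CRI-O journal output plus a "kubectl describe nodes" that fails because nothing is listening on localhost:8443. Below is a minimal sketch of that container-listing step, assuming crictl is on PATH and non-interactive sudo is available; it is not minikube's logs.go/cri.go code.

    // gatherlogs.go — illustrative sketch of the per-component container check repeated above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }

    // containerIDs lists container IDs whose name matches the filter,
    // mirroring `sudo crictl ps -a --quiet --name=<name>` in the log.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                // The "No container was found matching ..." case in the log: with no
                // apiserver container, describe nodes against localhost:8443 is refused,
                // so only kubelet/dmesg/CRI-O journal output remains useful.
                fmt.Printf("no containers found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %s\n", c, strings.Join(ids, ", "))
        }
    }
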
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
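(Note on the recovery path above, PID 74389: once the control-plane restart times out, minikube falls back to a full cluster reset: kubeadm reset, removal of any stale /etc/kubernetes/*.conf that no longer references the expected endpoint, then a fresh kubeadm init. The following is only a condensed sketch assembled from the commands visible in the log; the kubeadm.yaml contents are not shown here and the --ignore-preflight-errors list is abbreviated.)

    # Condensed sketch of the reset-and-reinit flow shown above (v1.20.0 profile).
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # Drop any kubeconfig that does not point at the expected control-plane endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
    # Re-initialize from the config minikube wrote earlier (flag list abbreviated here).
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem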
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
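(The repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to appear in the freshly initialized cluster, after binding cluster-admin to kube-system:default with the minikube-rbac ClusterRoleBinding. A minimal equivalent of those two steps, sketched with plain kubectl rather than the bundled binary path:)

    # Grant cluster-admin to the kube-system default ServiceAccount (as in the log),
    # then poll until the "default" ServiceAccount exists before continuing.
    kubectl create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    until kubectl get sa default >/dev/null 2>&1; do sleep 1; done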
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
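	(The healthz probe logged just above can be reproduced by hand against the same endpoint. This is an illustrative sketch only, not part of the test: it assumes the default kubeadm RBAC that lets unauthenticated clients read /healthz, hence the -k flag instead of client certificates.)
	    # Manual equivalent of the apiserver health check above (assumption: anonymous
	    # access to /healthz is allowed, as on a default kubeadm cluster).
	    curl -k https://192.168.39.125:8443/healthz
	    # A healthy control plane answers HTTP 200 with the body "ok",
	    # matching the "returned 200: ok" lines in this log.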
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
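	(Illustrative follow-up, not part of the test run: once minikube prints "Done!", the kubeconfig context it configured can be inspected from the host with standard kubectl commands.)
	    # Check the context minikube just wrote and the cluster it points at.
	    kubectl config current-context                      # expected: embed-certs-291295
	    kubectl --context embed-certs-291295 get pods -n kube-system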
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
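	(The four grep/rm steps above check each kubeconfig for the expected control-plane URL and remove stale files before re-running kubeadm init. A minimal sketch of that loop, assuming the same four files; this is an illustration, not minikube's actual implementation.)
	    # Remove kubeconfigs that are missing or do not point at the expected endpoint.
	    for f in admin kubelet controller-manager scheduler; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/${f}.conf"; then
	        sudo rm -f "/etc/kubernetes/${f}.conf"   # stale or absent: clear it before kubeadm init
	      fi
	    done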
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
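	(The kubeadm warnings above name their own remedies. An illustrative way to apply them on the node, using the config path seen earlier in this log; the new-config file name is an assumption.)
	    # Migrate the deprecated kubeadm.k8s.io/v1beta3 config, as the warning suggests.
	    sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm config migrate \
	      --old-config /var/tmp/minikube/kubeadm.yaml \
	      --new-config /var/tmp/minikube/kubeadm-migrated.yaml
	    # Address the Service-Kubelet warning.
	    sudo systemctl enable kubelet.service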
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
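	(For context on the 496-byte conflist written above: a minimal bridge CNI configuration of roughly that shape might look like the sketch below. This is an assumption for illustration only; it is not the file minikube actually writes, and the subnet is a placeholder.)
	    # Hypothetical minimal bridge CNI config (illustrative, values assumed).
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.0.0/16" }]] } },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF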
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
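	(The log-gathering pass above can be reproduced by hand on the node with the same tools; <container-id> below is a placeholder for the IDs printed by crictl ps.)
	    # Manual equivalents of the collection steps logged above.
	    sudo crictl ps -a                                    # container status
	    sudo /usr/bin/crictl logs --tail 400 <container-id>  # per-container logs
	    sudo journalctl -u kubelet -n 400                    # kubelet logs
	    sudo journalctl -u crio -n 400                       # CRI-O logs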
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
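	(A roughly equivalent one-liner for the readiness waits logged here, shown for illustration; it is not what the test itself runs.)
	    # Wait up to 6m for the same coredns pod to report Ready.
	    kubectl --context default-k8s-diff-port-852598 -n kube-system \
	      wait --for=condition=Ready pod/coredns-6f6b679f8f-fmjdr --timeout=6m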
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
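The apply step above copies the metrics-server manifests onto the node and then runs kubectl over them with the cluster's kubeconfig. A minimal local sketch of that step, assuming the manifest paths and KUBECONFIG value shown in the log and running kubectl via os/exec instead of over minikube's SSH runner:

// Sketch only: re-runs the `kubectl apply -f` command recorded in the log.
// The manifest paths and KUBECONFIG path come from the log; executing the
// command locally (not over SSH) is an assumption for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}

	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command("kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubectl apply failed: %v\n", err)
		os.Exit(1)
	}
}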
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
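The healthz wait recorded above simply polls the apiserver endpoint until it answers HTTP 200 with "ok". A minimal sketch of that loop, assuming the address from the log and skipping TLS verification purely for illustration (minikube itself trusts the cluster CA):

// Sketch only: poll https://<apiserver>/healthz until it returns 200 or a
// timeout expires. InsecureSkipVerify is a simplification for this example.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.228:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}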
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
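The pod_ready waits above repeatedly check each system pod's PodReady condition until it reports True. A minimal client-go sketch of one such wait, assuming the pod name and namespace from the log and a hypothetical local kubeconfig path:

// Sketch only: fetch the pod and inspect its PodReady condition in a loop.
// Pod name/namespace are from the log; the kubeconfig path and poll interval
// are assumptions for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-xp4z4", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}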
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
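The cleanup above greps each leftover kubeadm kubeconfig for the expected control-plane endpoint and removes any file that does not contain it (or does not exist), so the retried init starts from clean files. A minimal sketch of that check-and-remove loop, assuming the same paths and endpoint as the log and running the commands without sudo:

// Sketch only: mirrors the grep-then-rm sequence recorded in the log.
// grep exits non-zero when the pattern (or the file) is absent, which is
// treated as "stale or missing - remove".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			if err := exec.Command("rm", "-f", f).Run(); err != nil {
				fmt.Printf("failed to remove %s: %v\n", f, err)
			}
		}
	}
}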
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
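The failure recorded above is the kubelet never answering its health check on 127.0.0.1:10248, so kubeadm's wait-control-plane phase times out. A minimal sketch of the remediation the log itself suggests, where <profile> is a placeholder for the affected minikube profile (an assumption, not taken from this run):

  # retry the start with an explicit systemd cgroup driver for the kubelet, as suggested above
  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
  # if it still fails, inspect the kubelet and the container runtime, per the hints in the log
  journalctl -xeu kubelet
  crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  # collect full logs for a bug report
  minikube logs --file=logs.txt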
	
	
	==> CRI-O <==
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.312809740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012904312785981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34fd2e7f-1759-4f36-8a81-fd1515a77af0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.313533846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=572e0994-397f-4ae3-8503-b604fb0c9b52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.313591000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=572e0994-397f-4ae3-8503-b604fb0c9b52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.313852043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011786202865770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c43757b6fe324da3d6c9d1fbf744fb7afd3dd2bff9c1c41eb2afd2266b9cd9,PodSandboxId:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724011773851070525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb,PodSandboxId:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011771073937962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4,PodSandboxId:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011755355188280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae
54-0b3216dcae47,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011755341248681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588f
d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600,PodSandboxId:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011751655760310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,},Annotations:map[string]string{io.kubern
etes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741,PodSandboxId:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011751711343998,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df,PodSandboxId:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011751592005658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0,PodSandboxId:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011751614516073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=572e0994-397f-4ae3-8503-b604fb0c9b52 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.349246989Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3b9a978-fd3b-4e4f-9d61-b06b30424632 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.349344058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3b9a978-fd3b-4e4f-9d61-b06b30424632 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.350550664Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51eaa0b1-2eb5-4e40-8741-b1782f80991a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.351053203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012904351029165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51eaa0b1-2eb5-4e40-8741-b1782f80991a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.351610405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36dd325c-d5bf-4ef2-a06e-37d116c7f938 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.351740997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36dd325c-d5bf-4ef2-a06e-37d116c7f938 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.351931698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011786202865770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c43757b6fe324da3d6c9d1fbf744fb7afd3dd2bff9c1c41eb2afd2266b9cd9,PodSandboxId:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724011773851070525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb,PodSandboxId:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011771073937962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4,PodSandboxId:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011755355188280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae
54-0b3216dcae47,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011755341248681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588f
d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600,PodSandboxId:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011751655760310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,},Annotations:map[string]string{io.kubern
etes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741,PodSandboxId:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011751711343998,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df,PodSandboxId:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011751592005658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0,PodSandboxId:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011751614516073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36dd325c-d5bf-4ef2-a06e-37d116c7f938 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.388608310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67505a1e-0028-4072-bac1-99417edcd789 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.388735997Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67505a1e-0028-4072-bac1-99417edcd789 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.390264709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a32b74f0-fc1b-4e5f-82aa-26d7feb490d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.390609878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012904390588891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a32b74f0-fc1b-4e5f-82aa-26d7feb490d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.391130818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7d97a4b-8305-4063-8496-026ed74adaa7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.391182590Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7d97a4b-8305-4063-8496-026ed74adaa7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.391401293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011786202865770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c43757b6fe324da3d6c9d1fbf744fb7afd3dd2bff9c1c41eb2afd2266b9cd9,PodSandboxId:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724011773851070525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb,PodSandboxId:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011771073937962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4,PodSandboxId:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011755355188280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae
54-0b3216dcae47,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011755341248681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588f
d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600,PodSandboxId:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011751655760310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,},Annotations:map[string]string{io.kubern
etes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741,PodSandboxId:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011751711343998,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df,PodSandboxId:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011751592005658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0,PodSandboxId:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011751614516073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7d97a4b-8305-4063-8496-026ed74adaa7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.425397004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa1878a9-4539-48a5-983c-0527be91495a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.425472083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa1878a9-4539-48a5-983c-0527be91495a name=/runtime.v1.RuntimeService/Version
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.426596425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fababbf1-2d5c-43fc-878c-7ed1db5ad8f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.427077365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012904427056174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fababbf1-2d5c-43fc-878c-7ed1db5ad8f8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.427610114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77ef1218-c68b-46e8-b6f6-f8ec8bbbf6a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.427729134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77ef1218-c68b-46e8-b6f6-f8ec8bbbf6a2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:28:24 no-preload-944426 crio[733]: time="2024-08-18 20:28:24.427930119Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724011786202865770,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588fd,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9c43757b6fe324da3d6c9d1fbf744fb7afd3dd2bff9c1c41eb2afd2266b9cd9,PodSandboxId:63c8289ef6722c4074900368c9e398a1fd3499c4980bb8d13ab862abc4347f1c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724011773851070525,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8e252dc5-cc67-484b-9b0e-9ffffbaebdf4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb,PodSandboxId:41290bd918a40cba9586457e308d1963be9115ed610220241526b7555330c1aa,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724011771073937962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-vqsgw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4e228f-22e6-4b65-a49f-ea58560346a5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4,PodSandboxId:f365c38b6aad68b37ddddaef8e49f68b4dfc430320f54d3e9e9b3487afb6405e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724011755355188280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2l6g8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab70884b-4b6b-4ebc-ae
54-0b3216dcae47,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57,PodSandboxId:9a4f1cd9d08765cc9e0025974e4ee4e6d90c1c7e75f1d7571dcdb9c37a84ebe6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724011755341248681,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b159448e-15bd-4eb0-bd7f-ddba779588f
d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600,PodSandboxId:1a331845bb9a9f09e5072b7bb30fe851963299e962c4b4898783497bc8b1c207,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724011751655760310,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3808e4a939d67f43502a70e686fad8f,},Annotations:map[string]string{io.kubern
etes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741,PodSandboxId:3313973c0817cf9b495a2f94bd413763eb525274b7db8be6d975f77da6b09381,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724011751711343998,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99ffdac6cc9e86317bcefcc303571087,},Annotations:map[string]string{io.kubernetes.containe
r.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df,PodSandboxId:451e97000001228d673d33afd0cb2888d56c141a2b6e06cd208bdfd4e6eb2c3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724011751592005658,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9aa3a73652c83efb96dc0fdb1df0ef5,},Annotations:map[string]string{io.kuber
netes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0,PodSandboxId:626763774bb386c6a121bb14e97b3118e204f240e6a4e07766afcec4d57ade92,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724011751614516073,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-944426,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7aa70319472b0369a7d6acd78abc4bf,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77ef1218-c68b-46e8-b6f6-f8ec8bbbf6a2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3bb0cae57195c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   9a4f1cd9d0876       storage-provisioner
	a9c43757b6fe3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   63c8289ef6722       busybox
	c0a76eb785f5c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   41290bd918a40       coredns-6f6b679f8f-vqsgw
	6d66c800d25d3       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Running             kube-proxy                1                   f365c38b6aad6       kube-proxy-2l6g8
	ad65c84a94b18       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   9a4f1cd9d0876       storage-provisioner
	38c187ad4ff35       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      19 minutes ago      Running             kube-scheduler            1                   3313973c0817c       kube-scheduler-no-preload-944426
	7260b47bfedc9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   1a331845bb9a9       etcd-no-preload-944426
	568c722ae9e2f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      19 minutes ago      Running             kube-apiserver            1                   626763774bb38       kube-apiserver-no-preload-944426
	fb1a81f2aed91       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      19 minutes ago      Running             kube-controller-manager   1                   451e970000012       kube-controller-manager-no-preload-944426
	
	
	==> coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54741 - 41316 "HINFO IN 4776076796205361173.4031226827159274279. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014000429s
	
	
	==> describe nodes <==
	Name:               no-preload-944426
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-944426
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=no-preload-944426
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T19_59_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:59:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-944426
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 20:28:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 20:25:02 +0000   Sun, 18 Aug 2024 19:59:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 20:25:02 +0000   Sun, 18 Aug 2024 19:59:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 20:25:02 +0000   Sun, 18 Aug 2024 19:59:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 20:25:02 +0000   Sun, 18 Aug 2024 20:09:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.228
	  Hostname:    no-preload-944426
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba8c2789be914935b15347b81090b285
	  System UUID:                ba8c2789-be91-4935-b153-47b81090b285
	  Boot ID:                    89a85078-3e0f-4f58-977e-2125e57c6b90
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-vqsgw                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-944426                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-944426             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-944426    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-2l6g8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-944426             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-mhhbp              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-944426 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-944426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-944426 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-944426 status is now: NodeHasSufficientMemory
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-944426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-944426 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-944426 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-944426 event: Registered Node no-preload-944426 in Controller
	  Normal  CIDRAssignmentFailed     28m                cidrAllocator    Node no-preload-944426 status is now: CIDRAssignmentFailed
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m (x7 over 19m)  kubelet          Node no-preload-944426 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-944426 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m (x9 over 19m)  kubelet          Node no-preload-944426 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-944426 event: Registered Node no-preload-944426 in Controller
	
	
	==> dmesg <==
	[Aug18 20:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050366] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042060] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.041370] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.678250] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625791] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.910269] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.059422] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059820] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.162580] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.157073] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[  +0.289607] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[Aug18 20:09] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.060877] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.267087] systemd-fstab-generator[1437]: Ignoring "noauto" option for root device
	[  +4.591208] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.927650] systemd-fstab-generator[2071]: Ignoring "noauto" option for root device
	[  +4.798258] kauditd_printk_skb: 58 callbacks suppressed
	[  +7.800521] kauditd_printk_skb: 8 callbacks suppressed
	[ +15.433906] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] <==
	{"level":"info","ts":"2024-08-18T20:09:12.522418Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T20:09:12.507857Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.228:2380"}
	{"level":"info","ts":"2024-08-18T20:09:12.522740Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.228:2380"}
	{"level":"info","ts":"2024-08-18T20:09:13.475816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-18T20:09:13.475925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-18T20:09:13.475975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 received MsgPreVoteResp from cda7d178093df040 at term 2"}
	{"level":"info","ts":"2024-08-18T20:09:13.476011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 became candidate at term 3"}
	{"level":"info","ts":"2024-08-18T20:09:13.476035Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 received MsgVoteResp from cda7d178093df040 at term 3"}
	{"level":"info","ts":"2024-08-18T20:09:13.476063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cda7d178093df040 became leader at term 3"}
	{"level":"info","ts":"2024-08-18T20:09:13.476089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cda7d178093df040 elected leader cda7d178093df040 at term 3"}
	{"level":"info","ts":"2024-08-18T20:09:13.478746Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:09:13.479054Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:09:13.479386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T20:09:13.479438Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T20:09:13.478752Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"cda7d178093df040","local-member-attributes":"{Name:no-preload-944426 ClientURLs:[https://192.168.61.228:2379]}","request-path":"/0/members/cda7d178093df040/attributes","cluster-id":"a6bf8e0580476be9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T20:09:13.480102Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:09:13.480137Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:09:13.481074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.228:2379"}
	{"level":"info","ts":"2024-08-18T20:09:13.481281Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T20:19:13.512445Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":891}
	{"level":"info","ts":"2024-08-18T20:19:13.523442Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":891,"took":"10.513591ms","hash":985806228,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2764800,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-08-18T20:19:13.523566Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":985806228,"revision":891,"compact-revision":-1}
	{"level":"info","ts":"2024-08-18T20:24:13.521330Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1133}
	{"level":"info","ts":"2024-08-18T20:24:13.525889Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1133,"took":"4.06347ms","hash":1479881221,"current-db-size-bytes":2764800,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-18T20:24:13.525951Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1479881221,"revision":1133,"compact-revision":891}
	
	
	==> kernel <==
	 20:28:24 up 19 min,  0 users,  load average: 0.19, 0.14, 0.10
	Linux no-preload-944426 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:24:15.776095       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:24:15.776226       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0818 20:24:15.777244       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:24:15.777273       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:25:15.778019       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:25:15.778188       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:25:15.778267       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:25:15.778307       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0818 20:25:15.779353       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:25:15.779401       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:27:15.780176       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:27:15.780540       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:27:15.780178       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:27:15.780745       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0818 20:27:15.781827       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:27:15.781896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] <==
	E0818 20:23:18.512415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:23:18.979841       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:23:48.519543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:23:48.986952       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:24:18.525666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:24:18.993759       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:24:48.532212       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:24:49.001317       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:25:02.911390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-944426"
	E0818 20:25:18.538575       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:25:19.008267       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:25:24.000501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="188.327µs"
	I0818 20:25:36.994215       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="48.775µs"
	E0818 20:25:48.544501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:25:49.016173       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:26:18.550568       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:26:19.025110       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:26:48.556448       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:26:49.032876       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:27:18.562899       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:27:19.040901       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:27:48.569220       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:27:49.049271       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:28:18.575412       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:28:19.056349       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 20:09:15.628159       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 20:09:15.649850       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.228"]
	E0818 20:09:15.649980       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 20:09:15.689403       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 20:09:15.689434       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 20:09:15.689464       1 server_linux.go:169] "Using iptables Proxier"
	I0818 20:09:15.698972       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 20:09:15.699802       1 server.go:483] "Version info" version="v1.31.0"
	I0818 20:09:15.699883       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 20:09:15.703939       1 config.go:197] "Starting service config controller"
	I0818 20:09:15.706026       1 config.go:104] "Starting endpoint slice config controller"
	I0818 20:09:15.706952       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 20:09:15.707284       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 20:09:15.707392       1 config.go:326] "Starting node config controller"
	I0818 20:09:15.707415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 20:09:15.807738       1 shared_informer.go:320] Caches are synced for node config
	I0818 20:09:15.807786       1 shared_informer.go:320] Caches are synced for service config
	I0818 20:09:15.807978       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] <==
	I0818 20:09:12.964007       1 serving.go:386] Generated self-signed cert in-memory
	W0818 20:09:14.752088       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 20:09:14.752133       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 20:09:14.752143       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 20:09:14.752149       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 20:09:14.783216       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 20:09:14.783349       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 20:09:14.786118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 20:09:14.786166       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 20:09:14.786355       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 20:09:14.786459       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 20:09:14.886846       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 20:27:11 no-preload-944426 kubelet[1444]: E0818 20:27:11.978776    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:27:21 no-preload-944426 kubelet[1444]: E0818 20:27:21.271018    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012841270784601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:21 no-preload-944426 kubelet[1444]: E0818 20:27:21.271076    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012841270784601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:24 no-preload-944426 kubelet[1444]: E0818 20:27:24.980113    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:27:31 no-preload-944426 kubelet[1444]: E0818 20:27:31.275145    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012851274472516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:31 no-preload-944426 kubelet[1444]: E0818 20:27:31.275465    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012851274472516,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:38 no-preload-944426 kubelet[1444]: E0818 20:27:38.979739    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:27:41 no-preload-944426 kubelet[1444]: E0818 20:27:41.278039    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012861277359632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:41 no-preload-944426 kubelet[1444]: E0818 20:27:41.278727    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012861277359632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:49 no-preload-944426 kubelet[1444]: E0818 20:27:49.980349    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:27:51 no-preload-944426 kubelet[1444]: E0818 20:27:51.281213    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012871280730372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:27:51 no-preload-944426 kubelet[1444]: E0818 20:27:51.281367    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012871280730372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:01 no-preload-944426 kubelet[1444]: E0818 20:28:01.283731    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012881283134706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:01 no-preload-944426 kubelet[1444]: E0818 20:28:01.283769    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012881283134706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:02 no-preload-944426 kubelet[1444]: E0818 20:28:02.980064    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:28:11 no-preload-944426 kubelet[1444]: E0818 20:28:11.007602    1444 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 20:28:11 no-preload-944426 kubelet[1444]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 20:28:11 no-preload-944426 kubelet[1444]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 20:28:11 no-preload-944426 kubelet[1444]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 20:28:11 no-preload-944426 kubelet[1444]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 20:28:11 no-preload-944426 kubelet[1444]: E0818 20:28:11.286510    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012891285844039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:11 no-preload-944426 kubelet[1444]: E0818 20:28:11.286553    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012891285844039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:17 no-preload-944426 kubelet[1444]: E0818 20:28:17.979834    1444 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-mhhbp" podUID="2541855e-1597-4465-b244-d0d790fe4f6b"
	Aug 18 20:28:21 no-preload-944426 kubelet[1444]: E0818 20:28:21.289106    1444 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012901288500857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:28:21 no-preload-944426 kubelet[1444]: E0818 20:28:21.289436    1444 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012901288500857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] <==
	I0818 20:09:46.288903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 20:09:46.304129       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 20:09:46.305048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 20:10:03.702554       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 20:10:03.702924       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-944426_cc3e55d8-a390-4aec-8905-21640048ba99!
	I0818 20:10:03.703160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68bb3579-0737-406e-b932-37ac245a50d7", APIVersion:"v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-944426_cc3e55d8-a390-4aec-8905-21640048ba99 became leader
	I0818 20:10:03.806446       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-944426_cc3e55d8-a390-4aec-8905-21640048ba99!
	
	
	==> storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] <==
	I0818 20:09:15.440698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0818 20:09:45.443185       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-944426 -n no-preload-944426
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-944426 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-mhhbp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-944426 describe pod metrics-server-6867b74b74-mhhbp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-944426 describe pod metrics-server-6867b74b74-mhhbp: exit status 1 (61.341407ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-mhhbp" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-944426 describe pod metrics-server-6867b74b74-mhhbp: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (337.33s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (497.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-18 20:31:07.912354328 +0000 UTC m=+6778.304693603
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-852598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-852598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.784µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-852598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
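For manual follow-up (a sketch only, assuming kubectl access to the same default-k8s-diff-port-852598 context; this command is not part of the harness output), the image the scraper deployment actually runs can be read directly:

	kubectl --context default-k8s-diff-port-852598 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

When the dashboard addon has applied the custom image override used in this run, this prints registry.k8s.io/echoserver:1.4; here the deployment never became describable before the context deadline, so the check above had nothing to match against.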
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-852598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-852598 logs -n 25: (2.111454763s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:28 UTC | 18 Aug 24 20:28 UTC |
	| delete  | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:28 UTC | 18 Aug 24 20:28 UTC |
	| delete  | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:28 UTC | 18 Aug 24 20:28 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
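The two timestamps above are compared so the tool can decide whether the guest clock needs to be resynced; the ~77ms delta is under the tolerance, so nothing is adjusted. A hedged Go sketch of that comparison (the 2-second tolerance is an assumption for illustration, not necessarily the value minikube uses):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether guest/host clock skew is acceptable,
    // mirroring the "guest clock delta is within tolerance" check in the log.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(77 * time.Millisecond) // skew comparable to the log above
    	delta, ok := withinTolerance(guest, host, 2*time.Second) // illustrative tolerance
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }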
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
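When the bridge netfilter sysctl cannot be read (the br_netfilter module is not loaded yet, hence the status 255 above), the run falls back to loading the module and then enables IPv4 forwarding. A rough Go sketch of that verify-then-fallback sequence, shelling out to the same commands shown in the log (the run helper and its error handling are illustrative):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		log.Printf("%s %v failed: %v (%s)", name, args, err, out)
    	}
    	return err
    }

    func main() {
    	// Verify bridge netfilter; if the proc entry is missing, load the module.
    	if run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables") != nil {
    		run("sudo", "modprobe", "br_netfilter")
    	}
    	// Enable IPv4 forwarding, as in the log above.
    	run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }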
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
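Interleaved with the embed-certs provisioning, the old-k8s-version-247539 machine is still waiting for a DHCP lease; each failed lookup schedules another attempt with a growing, jittered delay. A simplified Go sketch of that wait loop (waitForIP and lookupIP are hypothetical stand-ins for the libmachine/libvirt calls; the backoff constants and the returned address are illustrative):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP is a stand-in for querying libvirt for the domain's current lease.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoLease
    	}
    	return "192.0.2.10", nil // illustrative address only
    }

    // waitForIP retries lookupIP with a jittered, growing delay, like the
    // "will retry after ..." messages in the log above.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for attempt := 0; time.Now().Before(deadline); attempt++ {
    		if ip, err := lookupIP(attempt); err == nil {
    			return ip, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", errors.New("timed out waiting for IP")
    }

    func main() {
    	fmt.Println(waitForIP(30 * time.Second))
    }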
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
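Each CA bundle copied above is published under /etc/ssl/certs twice: once by name and once as <subject-hash>.0, the lookup key OpenSSL expects, and the -checkend 86400 runs then confirm none of the control-plane certificates expire within a day. A hedged Go sketch that shells out to the same openssl invocations seen in the log (the helper names are illustrative; the symlink step itself is omitted):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHash returns the OpenSSL subject hash used to name /etc/ssl/certs/<hash>.0.
    func subjectHash(pem string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	return strings.TrimSpace(string(out)), err
    }

    // expiresWithinDay mirrors "openssl x509 -checkend 86400": a non-zero exit
    // status means the certificate expires within 86400 seconds.
    func expiresWithinDay(pem string) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", pem, "-checkend", "86400").Run() != nil
    }

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	if hash, err := subjectHash(pem); err == nil {
    		// the hash would become the /etc/ssl/certs/<hash>.0 symlink target name
    		fmt.Println("subject hash:", hash)
    	}
    	fmt.Println("expires within 24h:", expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }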
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
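The four grep/rm pairs above apply a single rule: any existing kubeconfig under /etc/kubernetes that does not reference control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. A compact Go sketch of that cleanup (paths and endpoint taken from the log; shelling out to grep/rm is an illustrative simplification):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, c := range confs {
    		path := "/etc/kubernetes/" + c
    		// grep exits non-zero when the endpoint (or the file itself) is missing.
    		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
    			exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }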
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
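(Side note, not part of the captured log: the provisioning step above sets the guest hostname over SSH and then patches /etc/hosts so the new name resolves locally. Below is a minimal, self-contained Go sketch of the same idea using golang.org/x/crypto/ssh rather than minikube's internal provisioning code; the address, user and key path are copied from the log and the snippet only runs the read-only `hostname` command.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address as they appear in the log above; adjust for your own machine.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the external client flags above
	}
	client, err := ssh.Dial("tcp", "192.168.72.111:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same probe the provisioner runs first: ask the guest for its current hostname.
	out, err := session.CombinedOutput("hostname")
	fmt.Printf("hostname: %s err: %v\n", out, err)
}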
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
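(Side note, not part of the captured log: the fix.go lines above compare the guest clock against the host clock and accept the ~72ms drift because it falls inside a tolerance. A tiny sketch of that comparison follows; the 2s tolerance is an assumption for illustration, since the log only shows that the delta was accepted.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock delta is small
// enough to skip resynchronising the guest clock.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(72 * time.Millisecond)                  // delta taken from the log above
	fmt.Println(withinTolerance(guest, host, 2*time.Second))  // true: within tolerance
}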
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
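(Side note, not part of the captured log: the sequence of `sed -i` calls above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and cgroup manager before CRI-O is restarted. A rough Go equivalent of one of those rewrites is sketched below with the standard regexp package; the key name and image value are taken from the log, everything else is illustrative.)

package main

import (
	"fmt"
	"regexp"
)

// setPauseImage replaces any existing pause_image assignment, mirroring the
// `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'` call in the log.
func setPauseImage(conf []byte, image string) []byte {
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("pause_image = %q", image)))
}

func main() {
	conf := []byte("[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n")
	fmt.Printf("%s", setPauseImage(conf, "registry.k8s.io/pause:3.10"))
}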
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
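(Side note, not part of the captured log: the one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the current mapping, so the update is idempotent. A Go sketch of the same filter-then-append idea on the file contents follows; writing the result back would need the same root privileges the log obtains via sudo.)

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<name>" and appends a fresh
// "ip\tname" mapping, mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$` shell trick.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.72.2\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.72.1", "host.minikube.internal"))
}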
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
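(Side note, not part of the captured log: the "will retry after ...: waiting for machine to come up" lines above poll libvirt for the VM's DHCP lease with progressively longer, slightly randomised delays. The generic sketch below reproduces that retry shape; it is not minikube's retry package, and the growth factor and attempt limit are assumptions for illustration.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with a growing, jittered delay until it succeeds or the
// attempt budget is exhausted, like the wait-for-IP loop in the log.
func waitFor(fn func() error, attempts int, base time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // lengthen the wait between probes
	}
	return errors.New("machine did not come up in time")
}

func main() {
	tries := 0
	err := waitFor(func() error {
		tries++
		if tries < 4 {
			return errors.New("no IP yet")
		}
		return nil
	}, 10, 200*time.Millisecond)
	fmt.Println(err, "after", tries, "probes")
}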
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
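(Side note, not part of the captured log: the preload check above runs `sudo crictl images --output json` and looks for the expected tags, e.g. registry.k8s.io/kube-apiserver:v1.31.0, to decide whether image loading can be skipped. The sketch below parses such output; the {"images":[{"repoTags":[...]}]} layout is assumed from typical crictl JSON output, not quoted from this log.)

package main

import (
	"encoding/json"
	"fmt"
)

// imagesOutput models only the fields of `crictl images --output json` that
// the preload check needs.
type imagesOutput struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image carries the wanted tag.
func hasImage(raw []byte, want string) (bool, error) {
	var out imagesOutput
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"]}]}`)
	ok, _ := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.0")
	fmt.Println(ok)
}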
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
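(Side note, not part of the captured log: the kubeadm config printed above is rendered from the options dump a few lines earlier — advertise address, bind port, node name, pod and service subnets. The text/template sketch below reproduces that render step in reduced form; the struct and template cover only the fields visible in this log and are not minikube's actual generator.)

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds just the values visible in the log; the real generator
// fills in many more fields.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "192.168.72.111",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-852598",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}
	template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, p)
}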
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
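
The run of "openssl x509 -noout -checkend 86400" commands above asks whether each certificate will still be valid 86400 seconds (24 hours) from now. A minimal Go equivalent, assuming the certificate path is passed as the first argument, might look like this (a sketch, not minikube's implementation):

// checkend.go - illustrative sketch: report whether a PEM certificate expires
// within the next 24 hours, mirroring `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // certificate path (assumed argument)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("Certificate will expire") // non-zero exit, like openssl
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
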
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
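
Rather than running a full "kubeadm init", the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config, as the commands above show. A minimal sketch of driving that same sequence from Go with os/exec, assuming kubeadm is on PATH and using the config path from the log, might look like:

// phases.go - illustrative sketch, not minikube's implementation: run the same
// kubeadm init phases the log records above, in order, against one config file.
package main

import (
	"log"
	"os/exec"
)

func main() {
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command("kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
		}
	}
}
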
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
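
The wait loop above polls the apiserver's /healthz roughly every 500ms and tolerates the early 403 and 500 responses (anonymous access, post-start hooks still settling) until the endpoint finally returns 200 "ok". A minimal standalone sketch of such a probe, with the URL as an example value taken from the log and TLS verification skipped because the request is anonymous against the cluster's self-signed serving cert, might look like:

// healthz_wait.go - illustrative sketch, not minikube's implementation: poll an
// apiserver /healthz endpoint until it returns HTTP 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.111:8444/healthz" // example endpoint from the log
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
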
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
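	The block above rewrites the cri-o drop-in config before restarting the runtime. Condensed into a hedged, re-runnable sketch (same /etc/crio/crio.conf.d/02-crio.conf drop-in as in the log; the default_sysctls guard and CNI cleanup steps are omitted for brevity):

		# point cri-o at the pause image expected by Kubernetes v1.31.0
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
		# use cgroupfs as the cgroup manager, with conmon running in the pod cgroup
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
		# let unprivileged pods bind low ports
		sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
		# br_netfilter was missing above (sysctl exited 255), so load it and enable IPv4 forwarding
		sudo modprobe br_netfilter
		sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
		# apply everything
		sudo systemctl daemon-reload
		sudo systemctl restart crio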
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
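	The one-liner above refreshes the host.minikube.internal entry in /etc/hosts idempotently: filter out any stale line, append the gateway mapping, then copy the rebuilt file back with sudo. A minimal sketch of the same pattern (gateway IP 192.168.61.1 as reported by the DHCP lease above):

		# keep every line except an existing host.minikube.internal mapping
		grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
		# append the current gateway mapping
		printf '192.168.61.1\thost.minikube.internal\n' >> /tmp/h.$$
		# install the rebuilt hosts file
		sudo cp /tmp/h.$$ /etc/hosts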
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
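	(Both restart flows above keep polling metrics-server pods that stay Ready=False for the whole window. To see why such a pod is stuck, one can inspect it directly; a sketch using kubectl, assuming the usual minikube context name for the profile and the upstream k8s-app=metrics-server label:
	    kubectl --context default-k8s-diff-port-852598 -n kube-system get pods -l k8s-app=metrics-server
	    kubectl --context default-k8s-diff-port-852598 -n kube-system describe pod -l k8s-app=metrics-server
	The describe output shows the failing readiness probe or image pull that keeps the condition False.)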
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
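	(The [Unit]/[Service] fragment above is the systemd drop-in minikube generates for the kubelet; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To confirm what the kubelet will actually run after daemon-reload, one can print the merged unit on the node:
	    sudo systemctl cat kubelet
	    sudo systemctl show kubelet -p ExecStart
	The empty "ExecStart=" line is deliberate: it clears the distro default before the minikube-specific ExecStart is applied.)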
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
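	(The YAML above, InitConfiguration plus ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, is what gets written to /var/tmp/minikube/kubeadm.yaml.new in the scp step below. Assuming the kubeadm binary staged under /var/lib/minikube/binaries and the `kubeadm config validate` subcommand available in recent kubeadm releases, a sanity check of such a file looks like:
	    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	This only checks schema and known fields; it does not start anything.)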
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
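	(Each of the three blocks above installs a certificate under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name, e.g. b5213941.0 for minikubeCA.pem. The hash in the link name comes straight from openssl, which is why the log runs `openssl x509 -hash` first:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0
	)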
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
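	(Each `-checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; openssl exits 0 if it will and non-zero if it expires within the window. Equivalent manual checks against the same files, with paths taken from this log:
	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "ok for >24h" || echo "expires within 24h"
	)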
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
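	(The probe loop above walks through the expected restart sequence: connection refused while the apiserver static pod comes up, 403 because the unauthenticated probe is seen as system:anonymous, 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, then 200. A hedged way to reproduce the verbose probe by hand, with the IP and port from this log and the standard kubeadm admin kubeconfig path:
	    sudo KUBECONFIG=/etc/kubernetes/admin.conf kubectl get --raw='/healthz?verbose'
	    curl -k https://192.168.61.228:8443/healthz    # unauthenticated; may return 403 as above
	)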
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
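The cycle above repeats while minikube waits for the old control plane to return: it looks for a running kube-apiserver, finds no containers for any control-plane component, and then gathers node diagnostics. The same diagnostics can be reproduced by hand on the node (after `minikube ssh` into the affected profile; the profile name is not shown in these lines) with the commands the log records, collected here as a sketch:

    # Sketch for manual reproduction on the node; commands are taken verbatim from the log above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${c} =="
      sudo crictl ps -a --quiet --name="${c}"       # empty output: no container found
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig     # "connection refused" while the apiserver is down
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a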
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
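The block above is minikube's stale-kubeconfig check before re-running kubeadm: each file under /etc/kubernetes is grepped for the expected control-plane endpoint, and because none of the files exist after the reset, every grep exits with status 2 and the file is removed so that kubeadm init can regenerate it. A minimal shell sketch of that check follows (the real logic lives in minikube's kubeadm.go, not in a script; this is only the equivalent shell behavior):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Missing file or wrong endpoint: drop it and let `kubeadm init` rewrite it.
      if ! sudo grep -q "${endpoint}" "/etc/kubernetes/${f}"; then
        sudo rm -f "/etc/kubernetes/${f}"
      fi
    done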
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
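Here the bridge CNI configuration chosen at cni.go:146 is written to the node: a 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist. The report does not reproduce the file's contents; the following is only a representative bridge conflist of the kind a bridge CNI setup uses, with the subnet and plugin options assumed for illustration:

    # Illustrative only: actual contents of 1-k8s.conflist are not shown in the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF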
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
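
For reference, the addon step logged above copies the manifests to /etc/kubernetes/addons on the node and applies them with the bundled kubectl against /var/lib/minikube/kubeconfig. The following is a minimal, illustrative Go sketch of that final apply step only (it is not minikube code); the kubectl path, kubeconfig path, and manifest file names are taken from the log lines above and are otherwise assumptions:

    // addon_apply_sketch.go - illustrative only, not part of minikube.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl" // path as seen in the log
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}

    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}

    	// Equivalent of: sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f ...
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "apply failed:", err)
    		os.Exit(1)
    	}
    }
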
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
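
The pod_ready.go entries above and below poll each system pod until its Ready condition turns True. A minimal sketch of that kind of loop using client-go is shown here; it is illustrative only, and the kubeconfig path, namespace, and pod name are placeholders rather than values from this run:

    // pod_ready_sketch.go - illustrative only.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-example", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }
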
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
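
The api_server.go entries in this run wait for the control plane by polling the apiserver's /healthz endpoint until it returns 200 "ok". Below is a minimal, self-contained sketch of such a poll, assuming the endpoint address from the log; the real check authenticates with the cluster's client certificates, which this sketch skips (InsecureSkipVerify) purely for brevity:

    // healthz_sketch.go - illustrative only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: skip cert verification instead of loading the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	// Address taken from the log above; substitute your own apiserver endpoint.
    	if err := waitHealthz("https://192.168.61.228:8443/healthz", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver is healthy")
    }
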
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
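
	The repeated kubelet-check failures above are kubeadm polling the kubelet's local healthz endpoint and never getting an answer before the wait-control-plane timeout. A minimal, hedged way to rerun the same probe by hand on the affected node (the profile name below is a placeholder, not taken from this log) is:

	minikube ssh -p <profile>                  # open a shell on the failing node; <profile> is a placeholder
	curl -sSL http://localhost:10248/healthz   # the same probe the kubelet-check loop runs
	sudo systemctl status kubelet              # unit status, as suggested in the output above
	sudo journalctl -xeu kubelet | tail -n 50  # recent kubelet log lines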
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
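
	The suggestion above points at a kubelet cgroup-driver mismatch as the likely cause. A hedged sketch of retrying the same start with the suggested override (driver, container runtime, and Kubernetes version are taken from the failing run above; the profile name is a placeholder, not from this log):

	minikube start -p <profile> \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd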
	
	
	==> CRI-O <==
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.400557387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013069400534093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee748cc5-4b71-49a6-8d2a-085ead108f4f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.401180104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82c54275-44af-452e-bfcd-8f1c98ee4642 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.401313940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82c54275-44af-452e-bfcd-8f1c98ee4642 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.401541101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b,PodSandboxId:1a4f5d80cbd6c92b2845d1a2456b75b776122bb6472479dd5bbca8ad4ad29871,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018578787279,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xp4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c416478-c540-4b55-9faa-95927e58d9a0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e,PodSandboxId:02cf34edaa3ed2dc4db9a41aeab7fd13c2acd71e08a972286cf1853df0114c8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018119976501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmjdr,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: b26f1a75-d466-4634-b9da-9505ca282e30,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107,PodSandboxId:6f3a5c04a09f63cfe2b2c842e8cf2396e56ed988071e0109019271b2e4ab54bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1724012017953392809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be7417-303b-4572-b9c9-1bbd594ed3fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af,PodSandboxId:36f5dc44788ca92ee4635f5d916c7376e95c3215beab5a56e1e3aadc89146279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724012016833861032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmvsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a577a1d-1e69-4bc2-ba50-c4922fcf58ae,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32,PodSandboxId:a35dfb1ab9d6dc4581afa05af6c604756ef3e95f0733df1732cf6e7d6e8b5667,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724012006085485148
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14,PodSandboxId:6a605266487369a0e03d701d5ad594a99d2797442bc193d1b41286c9fd35313d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724012006051516989,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9df5d8589a933b23e3dc29868079397,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9,PodSandboxId:07799b23ec11e6c6095a86de0c8a9b00dfab539013c1366c4cc22b7df3dae5c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724012006026915028,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d44b5831594f5f9237e6d36b37c379,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16,PodSandboxId:49f3c28de996dfb91c7d802bdfb4e8b49c11b2e09b3a643cdc48b4f9e90bfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724012005995541718,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b57415e0431c47f1a80aed8fcedb19e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9,PodSandboxId:4bcb9bab94fb35583d02206eaa17f4d02149703b88eccfb0fc8a1ec9921eb038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011719427092592,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82c54275-44af-452e-bfcd-8f1c98ee4642 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.437839311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f02abf87-0bcb-414c-84ab-41902c967680 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.437929954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f02abf87-0bcb-414c-84ab-41902c967680 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.439124790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0753870-9b8a-4c60-8ada-afadc47caa79 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.439574654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013069439552978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0753870-9b8a-4c60-8ada-afadc47caa79 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.440325057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b25a2cd7-e602-4cc4-af4e-1815acbcf504 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.440379033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b25a2cd7-e602-4cc4-af4e-1815acbcf504 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.440603812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b,PodSandboxId:1a4f5d80cbd6c92b2845d1a2456b75b776122bb6472479dd5bbca8ad4ad29871,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018578787279,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xp4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c416478-c540-4b55-9faa-95927e58d9a0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e,PodSandboxId:02cf34edaa3ed2dc4db9a41aeab7fd13c2acd71e08a972286cf1853df0114c8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018119976501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmjdr,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: b26f1a75-d466-4634-b9da-9505ca282e30,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107,PodSandboxId:6f3a5c04a09f63cfe2b2c842e8cf2396e56ed988071e0109019271b2e4ab54bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1724012017953392809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be7417-303b-4572-b9c9-1bbd594ed3fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af,PodSandboxId:36f5dc44788ca92ee4635f5d916c7376e95c3215beab5a56e1e3aadc89146279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724012016833861032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmvsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a577a1d-1e69-4bc2-ba50-c4922fcf58ae,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32,PodSandboxId:a35dfb1ab9d6dc4581afa05af6c604756ef3e95f0733df1732cf6e7d6e8b5667,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724012006085485148
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14,PodSandboxId:6a605266487369a0e03d701d5ad594a99d2797442bc193d1b41286c9fd35313d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724012006051516989,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9df5d8589a933b23e3dc29868079397,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9,PodSandboxId:07799b23ec11e6c6095a86de0c8a9b00dfab539013c1366c4cc22b7df3dae5c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724012006026915028,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d44b5831594f5f9237e6d36b37c379,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16,PodSandboxId:49f3c28de996dfb91c7d802bdfb4e8b49c11b2e09b3a643cdc48b4f9e90bfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724012005995541718,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b57415e0431c47f1a80aed8fcedb19e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9,PodSandboxId:4bcb9bab94fb35583d02206eaa17f4d02149703b88eccfb0fc8a1ec9921eb038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011719427092592,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b25a2cd7-e602-4cc4-af4e-1815acbcf504 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.479443632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f4b505d-64d5-4688-b942-786feb752814 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.479525795Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f4b505d-64d5-4688-b942-786feb752814 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.480704671Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c7df392-8eed-4449-b017-f057499b6db5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.481122381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013069481098821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c7df392-8eed-4449-b017-f057499b6db5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.481708307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc7796bf-2f4b-457c-a559-6a70bbb8c445 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.481760466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc7796bf-2f4b-457c-a559-6a70bbb8c445 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.481979011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b,PodSandboxId:1a4f5d80cbd6c92b2845d1a2456b75b776122bb6472479dd5bbca8ad4ad29871,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018578787279,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xp4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c416478-c540-4b55-9faa-95927e58d9a0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e,PodSandboxId:02cf34edaa3ed2dc4db9a41aeab7fd13c2acd71e08a972286cf1853df0114c8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018119976501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmjdr,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: b26f1a75-d466-4634-b9da-9505ca282e30,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107,PodSandboxId:6f3a5c04a09f63cfe2b2c842e8cf2396e56ed988071e0109019271b2e4ab54bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1724012017953392809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be7417-303b-4572-b9c9-1bbd594ed3fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af,PodSandboxId:36f5dc44788ca92ee4635f5d916c7376e95c3215beab5a56e1e3aadc89146279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724012016833861032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmvsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a577a1d-1e69-4bc2-ba50-c4922fcf58ae,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32,PodSandboxId:a35dfb1ab9d6dc4581afa05af6c604756ef3e95f0733df1732cf6e7d6e8b5667,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724012006085485148
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14,PodSandboxId:6a605266487369a0e03d701d5ad594a99d2797442bc193d1b41286c9fd35313d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724012006051516989,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9df5d8589a933b23e3dc29868079397,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9,PodSandboxId:07799b23ec11e6c6095a86de0c8a9b00dfab539013c1366c4cc22b7df3dae5c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724012006026915028,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d44b5831594f5f9237e6d36b37c379,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16,PodSandboxId:49f3c28de996dfb91c7d802bdfb4e8b49c11b2e09b3a643cdc48b4f9e90bfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724012005995541718,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b57415e0431c47f1a80aed8fcedb19e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9,PodSandboxId:4bcb9bab94fb35583d02206eaa17f4d02149703b88eccfb0fc8a1ec9921eb038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011719427092592,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc7796bf-2f4b-457c-a559-6a70bbb8c445 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.513998945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09f3917a-9d67-4ca3-bfe5-1590a13ead73 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.514068222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09f3917a-9d67-4ca3-bfe5-1590a13ead73 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.515575163Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80ec552e-d43e-4896-8548-d90a61effcfb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.515965898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013069515943587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80ec552e-d43e-4896-8548-d90a61effcfb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.517983661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3e94581-25c7-4e77-9b9b-a53656debe24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.518035989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3e94581-25c7-4e77-9b9b-a53656debe24 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:31:09 default-k8s-diff-port-852598 crio[733]: time="2024-08-18 20:31:09.518295416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b,PodSandboxId:1a4f5d80cbd6c92b2845d1a2456b75b776122bb6472479dd5bbca8ad4ad29871,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018578787279,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-xp4z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c416478-c540-4b55-9faa-95927e58d9a0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e,PodSandboxId:02cf34edaa3ed2dc4db9a41aeab7fd13c2acd71e08a972286cf1853df0114c8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724012018119976501,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fmjdr,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: b26f1a75-d466-4634-b9da-9505ca282e30,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107,PodSandboxId:6f3a5c04a09f63cfe2b2c842e8cf2396e56ed988071e0109019271b2e4ab54bd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1724012017953392809,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82be7417-303b-4572-b9c9-1bbd594ed3fe,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af,PodSandboxId:36f5dc44788ca92ee4635f5d916c7376e95c3215beab5a56e1e3aadc89146279,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING
,CreatedAt:1724012016833861032,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hmvsl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a577a1d-1e69-4bc2-ba50-c4922fcf58ae,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32,PodSandboxId:a35dfb1ab9d6dc4581afa05af6c604756ef3e95f0733df1732cf6e7d6e8b5667,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724012006085485148
,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14,PodSandboxId:6a605266487369a0e03d701d5ad594a99d2797442bc193d1b41286c9fd35313d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724012006051516989,Labels:map[
string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9df5d8589a933b23e3dc29868079397,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9,PodSandboxId:07799b23ec11e6c6095a86de0c8a9b00dfab539013c1366c4cc22b7df3dae5c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724012006026915028,Labels:map[string]string{io.kuber
netes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71d44b5831594f5f9237e6d36b37c379,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16,PodSandboxId:49f3c28de996dfb91c7d802bdfb4e8b49c11b2e09b3a643cdc48b4f9e90bfbe7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724012005995541718,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b57415e0431c47f1a80aed8fcedb19e,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9,PodSandboxId:4bcb9bab94fb35583d02206eaa17f4d02149703b88eccfb0fc8a1ec9921eb038,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724011719427092592,Labels:map[
string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-852598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08147e504744ad4e1b58b0b80c63c3fa,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3e94581-25c7-4e77-9b9b-a53656debe24 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e4b06ab3798d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   1a4f5d80cbd6c       coredns-6f6b679f8f-xp4z4
	15a34037a3e77       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   17 minutes ago      Running             coredns                   0                   02cf34edaa3ed       coredns-6f6b679f8f-fmjdr
	c375804891e54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   6f3a5c04a09f6       storage-provisioner
	d33a89ff30c15       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   17 minutes ago      Running             kube-proxy                0                   36f5dc44788ca       kube-proxy-hmvsl
	f00cba1f2a869       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   17 minutes ago      Running             kube-apiserver            2                   a35dfb1ab9d6d       kube-apiserver-default-k8s-diff-port-852598
	9cea141aef89f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   6a60526648736       etcd-default-k8s-diff-port-852598
	bd373d02f1c94       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   17 minutes ago      Running             kube-scheduler            2                   07799b23ec11e       kube-scheduler-default-k8s-diff-port-852598
	89d240f106d99       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   17 minutes ago      Running             kube-controller-manager   2                   49f3c28de996d       kube-controller-manager-default-k8s-diff-port-852598
	1bf63663e04d7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   22 minutes ago      Exited              kube-apiserver            1                   4bcb9bab94fb3       kube-apiserver-default-k8s-diff-port-852598
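
The table above is CRI-O's view of the node: both coredns replicas, kube-proxy, storage-provisioner, and the current control-plane pods are Running, and only the attempt-1 kube-apiserver remains in Exited state from the earlier start. As a sketch for regenerating this view against the same profile (assuming the default-k8s-diff-port-852598 profile still exists and that crictl is on the guest PATH, as it is in the minikube ISO):

    out/minikube-linux-amd64 -p default-k8s-diff-port-852598 ssh -- sudo crictl ps -a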
	
	
	==> coredns [15a34037a3e77ff85ea221e87ff322549f6ed32d9920fde7411a542feb618b0e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [5e4b06ab3798dc8b93771e5c92af7738e93a5488bc1c0317c4269579f46fe30b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-852598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-852598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=default-k8s-diff-port-852598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 20:13:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-852598
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 20:31:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 20:29:00 +0000   Sun, 18 Aug 2024 20:13:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 20:29:00 +0000   Sun, 18 Aug 2024 20:13:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 20:29:00 +0000   Sun, 18 Aug 2024 20:13:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 20:29:00 +0000   Sun, 18 Aug 2024 20:13:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.111
	  Hostname:    default-k8s-diff-port-852598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a56486080d8241f3b3642b1785624cd5
	  System UUID:                a5648608-0d82-41f3-b364-2b1785624cd5
	  Boot ID:                    b64df251-4eae-4244-b6eb-04579e33de99
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-fmjdr                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-6f6b679f8f-xp4z4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-852598                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-852598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-852598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-hmvsl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-852598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-gjnsb                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node default-k8s-diff-port-852598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node default-k8s-diff-port-852598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node default-k8s-diff-port-852598 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-852598 event: Registered Node default-k8s-diff-port-852598 in Controller
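
The node description above shows default-k8s-diff-port-852598 Ready with no taints, nine non-terminated pods, and roughly half the CPU (950m of 2 cores) requested. A sketch for pulling the same view, assuming the kubeconfig context carries the profile name as minikube sets it by default:

    kubectl --context default-k8s-diff-port-852598 describe node default-k8s-diff-port-852598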
	
	
	==> dmesg <==
	[  +0.039776] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.006050] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.512283] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.614452] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.501029] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.064609] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070091] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.208971] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.119503] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.323998] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.439679] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +0.066350] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.012215] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +4.624845] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.320396] kauditd_printk_skb: 54 callbacks suppressed
	[Aug18 20:09] kauditd_printk_skb: 31 callbacks suppressed
	[Aug18 20:13] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.233019] systemd-fstab-generator[2596]: Ignoring "noauto" option for root device
	[  +4.462481] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.593507] systemd-fstab-generator[2919]: Ignoring "noauto" option for root device
	[  +5.437000] systemd-fstab-generator[3047]: Ignoring "noauto" option for root device
	[  +0.109175] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.427378] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [9cea141aef89f130f02f0f74eb7cd1c220580ef47ef2a92202f21901a3d7bb14] <==
	{"level":"info","ts":"2024-08-18T20:13:27.201813Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-18T20:13:27.201862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a received MsgPreVoteResp from d9925a5c077e2b1a at term 1"}
	{"level":"info","ts":"2024-08-18T20:13:27.201879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a became candidate at term 2"}
	{"level":"info","ts":"2024-08-18T20:13:27.201904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a received MsgVoteResp from d9925a5c077e2b1a at term 2"}
	{"level":"info","ts":"2024-08-18T20:13:27.201917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d9925a5c077e2b1a became leader at term 2"}
	{"level":"info","ts":"2024-08-18T20:13:27.201924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d9925a5c077e2b1a elected leader d9925a5c077e2b1a at term 2"}
	{"level":"info","ts":"2024-08-18T20:13:27.203370Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"d9925a5c077e2b1a","local-member-attributes":"{Name:default-k8s-diff-port-852598 ClientURLs:[https://192.168.72.111:2379]}","request-path":"/0/members/d9925a5c077e2b1a/attributes","cluster-id":"5b15f244ed8f8770","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T20:13:27.203444Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:13:27.203532Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T20:13:27.203975Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T20:13:27.206284Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T20:13:27.206497Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:13:27.207048Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:13:27.209925Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T20:13:27.208367Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T20:13:27.210895Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.111:2379"}
	{"level":"info","ts":"2024-08-18T20:13:27.208408Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5b15f244ed8f8770","local-member-id":"d9925a5c077e2b1a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:13:27.224532Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:13:27.224576Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T20:23:27.250220Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":689}
	{"level":"info","ts":"2024-08-18T20:23:27.259805Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":689,"took":"9.176004ms","hash":3624675720,"current-db-size-bytes":2310144,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2310144,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-08-18T20:23:27.259891Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3624675720,"revision":689,"compact-revision":-1}
	{"level":"info","ts":"2024-08-18T20:28:27.258135Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":932}
	{"level":"info","ts":"2024-08-18T20:28:27.263015Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":932,"took":"4.07131ms","hash":2453472415,"current-db-size-bytes":2310144,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-18T20:28:27.263106Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2453472415,"revision":932,"compact-revision":689}
	
	
	==> kernel <==
	 20:31:09 up 22 min,  0 users,  load average: 0.62, 0.23, 0.16
	Linux default-k8s-diff-port-852598 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1bf63663e04d73c1b10e423539de35e54e2a2cb4634d4f3af5192aaa2f3d18a9] <==
	W0818 20:13:19.202878       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.209492       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.331347       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.411149       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.426741       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.468865       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.469212       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.481980       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.500751       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.545810       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.597802       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.612641       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.618161       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.631939       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.655679       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.666148       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.715375       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.765328       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.793969       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.864184       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:19.941552       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:20.030318       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:20.042823       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:20.174856       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0818 20:13:20.291613       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
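
The exited attempt-1 apiserver logged nothing but "connection refused" dials to etcd on 127.0.0.1:2379, i.e. it came up before etcd was reachable during the earlier start, and the attempt-2 container replaced it once etcd was serving. As a sketch, the current apiserver can be probed through its health endpoints, which include an etcd check:

    kubectl --context default-k8s-diff-port-852598 get --raw '/readyz?verbose'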
	
	
	==> kube-apiserver [f00cba1f2a86900739f735fc706a032e8ef0bfea994e8ed4b8a986ab974dae32] <==
	I0818 20:26:29.588554       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:26:29.588625       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:28:28.586472       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:28:28.586632       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0818 20:28:29.589041       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:28:29.589109       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0818 20:28:29.589148       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:28:29.589213       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0818 20:28:29.590377       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:28:29.590445       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0818 20:29:29.590842       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:29:29.590920       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0818 20:29:29.590979       1 handler_proxy.go:99] no RequestInfo found in the context
	E0818 20:29:29.591004       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0818 20:29:29.592137       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 20:29:29.592180       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
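
The running apiserver, by contrast, is healthy except for v1beta1.metrics.k8s.io: every OpenAPI fetch from that aggregated API returns 503, which lines up with the metrics-server and AddonExistsAfterStop failures listed at the top of this report. A sketch for narrowing that down (the k8s-app=metrics-server label and the metrics-server deployment name are assumed to be the ones minikube's addon normally uses):

    kubectl --context default-k8s-diff-port-852598 get apiservice v1beta1.metrics.k8s.io -o wide
    kubectl --context default-k8s-diff-port-852598 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-852598 -n kube-system logs deploy/metrics-server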
	
	
	==> kube-controller-manager [89d240f106d99d09d858b2122ed248d31e3c24a7a6daaba582a72a613a040d16] <==
	E0818 20:26:05.690699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:26:06.177707       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:26:35.698520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:26:36.185131       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:27:05.707408       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:27:06.193664       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:27:35.716381       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:27:36.202301       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:28:05.723729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:28:06.210451       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:28:35.732071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:28:36.218125       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:29:00.712835       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-852598"
	E0818 20:29:05.739413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:29:06.225788       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:29:35.746636       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:29:36.233872       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0818 20:29:46.302592       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="251.732µs"
	I0818 20:29:58.299306       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="102.669µs"
	E0818 20:30:05.753111       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:30:06.241030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:30:35.759277       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:30:36.248845       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0818 20:31:05.766100       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0818 20:31:06.258035       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [d33a89ff30c1573aef7ff595b81b01c693ef1d1f1309e89b2ca70f699650a8af] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 20:13:37.321386       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 20:13:37.331837       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.111"]
	E0818 20:13:37.331908       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 20:13:37.469694       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 20:13:37.473177       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 20:13:37.477486       1 server_linux.go:169] "Using iptables Proxier"
	I0818 20:13:37.500427       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 20:13:37.500725       1 server.go:483] "Version info" version="v1.31.0"
	I0818 20:13:37.500742       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 20:13:37.504650       1 config.go:197] "Starting service config controller"
	I0818 20:13:37.504681       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 20:13:37.504719       1 config.go:104] "Starting endpoint slice config controller"
	I0818 20:13:37.504725       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 20:13:37.506597       1 config.go:326] "Starting node config controller"
	I0818 20:13:37.506649       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 20:13:37.605931       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 20:13:37.605984       1 shared_informer.go:320] Caches are synced for service config
	I0818 20:13:37.606706       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bd373d02f1c944335c9f80c3ef80e1c8d2a0a8921d17b9d8d7850d50f747c4d9] <==
	W0818 20:13:28.589851       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 20:13:28.590620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.590807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 20:13:28.590907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.590981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 20:13:28.591015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.591048       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 20:13:28.591074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.591358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 20:13:28.591507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:28.592351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 20:13:28.592413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.605411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 20:13:29.605507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.618399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 20:13:29.618519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.706408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 20:13:29.706558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.774008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 20:13:29.774314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:29.818753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 20:13:29.818869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0818 20:13:30.002168       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 20:13:30.002325       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0818 20:13:33.080628       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 20:29:58 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:29:58.283427    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:30:01 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:01.589407    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013001588919696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:01 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:01.589870    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013001588919696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:11 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:11.591724    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013011591192540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:11 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:11.592026    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013011591192540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:13 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:13.282869    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:30:21 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:21.593817    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013021593328984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:21 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:21.594278    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013021593328984,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:26 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:26.282865    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:30:31 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:31.313746    2926 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 20:30:31 default-k8s-diff-port-852598 kubelet[2926]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 20:30:31 default-k8s-diff-port-852598 kubelet[2926]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 20:30:31 default-k8s-diff-port-852598 kubelet[2926]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 20:30:31 default-k8s-diff-port-852598 kubelet[2926]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 20:30:31 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:31.596618    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013031596136269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:31 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:31.596647    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013031596136269,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:40 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:40.283962    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:30:41 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:41.598700    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013041597990723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:41 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:41.598728    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013041597990723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:51 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:51.600509    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013051600147038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:51 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:51.600792    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013051600147038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:30:52 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:30:52.283767    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	Aug 18 20:31:01 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:31:01.602701    2926 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013061602337406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:31:01 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:31:01.602749    2926 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724013061602337406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 18 20:31:05 default-k8s-diff-port-852598 kubelet[2926]: E0818 20:31:05.285105    2926 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-gjnsb" podUID="6565c023-a1ba-422e-9e9a-b601dd0419d0"
	
	
	==> storage-provisioner [c375804891e545c4f25a35540f91b8690412dbb3eb16e5b710332ff5ce867107] <==
	I0818 20:13:38.188875       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 20:13:38.258212       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 20:13:38.260296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 20:13:38.327992       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 20:13:38.348011       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-852598_f9834ca9-c64d-4ce4-84fb-08d408f4c7f0!
	I0818 20:13:38.334802       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b3de4d9f-7c90-4fd3-87cc-c8403f11a438", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-852598_f9834ca9-c64d-4ce4-84fb-08d408f4c7f0 became leader
	I0818 20:13:38.454353       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-852598_f9834ca9-c64d-4ce4-84fb-08d408f4c7f0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-852598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-gjnsb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-852598 describe pod metrics-server-6867b74b74-gjnsb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-852598 describe pod metrics-server-6867b74b74-gjnsb: exit status 1 (98.138805ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-gjnsb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-852598 describe pod metrics-server-6867b74b74-gjnsb: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (497.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (155.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:25:53.949511   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:26:44.018714   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:26:46.286087   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:27:29.643869   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
E0818 20:27:51.701720   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.105:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.105:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 2 (228.699891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-247539" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-247539 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-247539 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.841µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-247539 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
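For context, the repeated connection-refused warnings above and the 9m0s timeout come from the same pattern: the test helper repeatedly lists pods in the kubernetes-dashboard namespace by the k8s-app=kubernetes-dashboard label and waits for one of them to report Ready, giving up when the deadline expires. The sketch below is a minimal, hypothetical reconstruction of that wait loop using client-go; it is not minikube's helper code, and the kubeconfig path, poll interval, and timeout are assumptions taken from the output above.

// poll_dashboard.go - minimal sketch of a deadline-bounded wait for a labeled pod.
// Not minikube's test helper; kubeconfig path and intervals are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod has the Ready condition set to True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Build a client from a kubeconfig (the path here is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// 9m0s matches the deadline reported in the failure above.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		// Same request that produces the "pod list ... connection refused"
		// warnings above when the apiserver is not reachable.
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for i := range pods.Items {
				if podReady(&pods.Items[i]) {
					fmt.Println("dashboard pod is ready:", pods.Items[i].Name)
					return
				}
			}
		} else {
			fmt.Println("WARNING: pod list returned:", err) // logged, then retried
		}

		select {
		case <-ctx.Done():
			fmt.Println("failed to start within deadline:", ctx.Err())
			return
		case <-time.After(5 * time.Second):
		}
	}
}

In this run every list call fails with connection refused because, as the status checks below show, the host VM is Running but the apiserver stays Stopped after the stop/start cycle, so the loop simply exhausts its deadline.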
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 2 (225.583218ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-247539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-247539 logs -n 25: (1.655695626s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-944426             | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-868662                  | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-868662 --memory=2200 --alsologtostderr   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:01 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-291295            | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC | 18 Aug 24 20:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| image   | newest-cni-868662 image list                           | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| delete  | -p newest-cni-868662                                   | newest-cni-868662            | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:01 UTC |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:01 UTC | 18 Aug 24 20:02 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-852598  | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-247539        | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-944426                  | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-291295                 | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-944426                                   | no-preload-944426            | jenkins | v1.33.1 | 18 Aug 24 20:02 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-291295                                  | embed-certs-291295           | jenkins | v1.33.1 | 18 Aug 24 20:03 UTC | 18 Aug 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-852598       | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-247539             | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-247539                              | old-k8s-version-247539       | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-852598 | jenkins | v1.33.1 | 18 Aug 24 20:04 UTC | 18 Aug 24 20:13 UTC |
	|         | default-k8s-diff-port-852598                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 20:04:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 20:04:42.787579   74485 out.go:345] Setting OutFile to fd 1 ...
	I0818 20:04:42.787666   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787673   74485 out.go:358] Setting ErrFile to fd 2...
	I0818 20:04:42.787677   74485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 20:04:42.787847   74485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 20:04:42.788352   74485 out.go:352] Setting JSON to false
	I0818 20:04:42.789201   74485 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6427,"bootTime":1724005056,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 20:04:42.789257   74485 start.go:139] virtualization: kvm guest
	I0818 20:04:42.791538   74485 out.go:177] * [default-k8s-diff-port-852598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 20:04:42.793185   74485 notify.go:220] Checking for updates...
	I0818 20:04:42.793204   74485 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 20:04:42.794555   74485 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 20:04:42.795955   74485 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:04:42.797158   74485 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 20:04:42.798459   74485 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 20:04:42.799775   74485 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 20:04:42.801373   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:04:42.801763   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.801823   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.816564   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45513
	I0818 20:04:42.816964   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.817465   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.817486   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.817807   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.818015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.818224   74485 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 20:04:42.818511   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:04:42.818540   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:04:42.832964   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I0818 20:04:42.833369   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:04:42.833866   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:04:42.833895   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:04:42.834252   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:04:42.834438   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:04:42.867522   74485 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 20:04:42.868931   74485 start.go:297] selected driver: kvm2
	I0818 20:04:42.868948   74485 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.869074   74485 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 20:04:42.869754   74485 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.869835   74485 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 20:04:42.884983   74485 install.go:137] /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2 version is 1.33.1
	I0818 20:04:42.885345   74485 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:04:42.885408   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:04:42.885421   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:04:42.885450   74485 start.go:340] cluster config:
	{Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:04:42.885567   74485 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 20:04:42.887511   74485 out.go:177] * Starting "default-k8s-diff-port-852598" primary control-plane node in "default-k8s-diff-port-852598" cluster
	I0818 20:04:42.011628   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:45.083629   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:42.888803   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:04:42.888828   74485 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 20:04:42.888834   74485 cache.go:56] Caching tarball of preloaded images
	I0818 20:04:42.888903   74485 preload.go:172] Found /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0818 20:04:42.888913   74485 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0818 20:04:42.888991   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:04:42.889163   74485 start.go:360] acquireMachinesLock for default-k8s-diff-port-852598: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:04:51.163614   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:04:54.235770   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:00.315808   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:03.387719   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:09.467686   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:12.539667   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:18.619652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:21.691652   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:27.771635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:30.843627   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:36.923644   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:39.995678   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:46.075611   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:49.147665   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:55.227683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:05:58.299638   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:04.379690   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:07.451735   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:13.531669   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:16.603729   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:22.683639   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:25.755659   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:31.835708   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:34.907693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:40.987635   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:44.059673   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:50.139693   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:53.211683   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:06:59.291707   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:02.363660   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:08.443634   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:11.515633   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:17.595640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:20.667689   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:26.747640   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:29.819663   73711 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.228:22: connect: no route to host
	I0818 20:07:32.823816   73815 start.go:364] duration metric: took 4m30.025550701s to acquireMachinesLock for "embed-certs-291295"
	I0818 20:07:32.823869   73815 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:32.823875   73815 fix.go:54] fixHost starting: 
	I0818 20:07:32.824270   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:32.824306   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:32.839755   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0818 20:07:32.840171   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:32.840614   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:07:32.840632   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:32.840962   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:32.841160   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:32.841303   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:07:32.842786   73815 fix.go:112] recreateIfNeeded on embed-certs-291295: state=Stopped err=<nil>
	I0818 20:07:32.842814   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	W0818 20:07:32.842974   73815 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:32.844743   73815 out.go:177] * Restarting existing kvm2 VM for "embed-certs-291295" ...
	I0818 20:07:32.821304   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:32.821364   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821657   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:07:32.821683   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:07:32.821904   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:07:32.823683   73711 machine.go:96] duration metric: took 4m37.430465042s to provisionDockerMachine
	I0818 20:07:32.823720   73711 fix.go:56] duration metric: took 4m37.451071449s for fixHost
	I0818 20:07:32.823727   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 4m37.451091077s
	W0818 20:07:32.823754   73711 start.go:714] error starting host: provision: host is not running
	W0818 20:07:32.823846   73711 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0818 20:07:32.823855   73711 start.go:729] Will try again in 5 seconds ...
	I0818 20:07:32.846149   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Start
	I0818 20:07:32.846317   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring networks are active...
	I0818 20:07:32.847049   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network default is active
	I0818 20:07:32.847478   73815 main.go:141] libmachine: (embed-certs-291295) Ensuring network mk-embed-certs-291295 is active
	I0818 20:07:32.847854   73815 main.go:141] libmachine: (embed-certs-291295) Getting domain xml...
	I0818 20:07:32.848748   73815 main.go:141] libmachine: (embed-certs-291295) Creating domain...
	I0818 20:07:34.053380   73815 main.go:141] libmachine: (embed-certs-291295) Waiting to get IP...
	I0818 20:07:34.054322   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.054765   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.054850   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.054751   75081 retry.go:31] will retry after 299.809444ms: waiting for machine to come up
	I0818 20:07:34.356537   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.356955   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.357014   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.356932   75081 retry.go:31] will retry after 366.714086ms: waiting for machine to come up
	I0818 20:07:34.725440   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:34.725885   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:34.725915   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:34.725839   75081 retry.go:31] will retry after 427.074526ms: waiting for machine to come up
	I0818 20:07:35.154258   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.154660   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.154682   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.154633   75081 retry.go:31] will retry after 565.117984ms: waiting for machine to come up
	I0818 20:07:35.721302   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:35.721729   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:35.721757   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:35.721686   75081 retry.go:31] will retry after 630.987814ms: waiting for machine to come up
	I0818 20:07:36.354566   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:36.354981   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:36.355016   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:36.354951   75081 retry.go:31] will retry after 697.865559ms: waiting for machine to come up
	I0818 20:07:37.054868   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.055232   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.055260   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.055188   75081 retry.go:31] will retry after 898.995052ms: waiting for machine to come up
	I0818 20:07:37.824187   73711 start.go:360] acquireMachinesLock for no-preload-944426: {Name:mkaa74026b854bae34a47a6811ef5a49f881e9e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 20:07:37.955672   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:37.956089   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:37.956115   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:37.956038   75081 retry.go:31] will retry after 1.482185836s: waiting for machine to come up
	I0818 20:07:39.440488   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:39.440838   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:39.440889   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:39.440794   75081 retry.go:31] will retry after 1.695604547s: waiting for machine to come up
	I0818 20:07:41.138708   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:41.139203   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:41.139231   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:41.139166   75081 retry.go:31] will retry after 1.806916927s: waiting for machine to come up
	I0818 20:07:42.947942   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:42.948344   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:42.948402   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:42.948319   75081 retry.go:31] will retry after 2.664923271s: waiting for machine to come up
	I0818 20:07:45.616102   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:45.616454   73815 main.go:141] libmachine: (embed-certs-291295) DBG | unable to find current IP address of domain embed-certs-291295 in network mk-embed-certs-291295
	I0818 20:07:45.616482   73815 main.go:141] libmachine: (embed-certs-291295) DBG | I0818 20:07:45.616411   75081 retry.go:31] will retry after 3.460207847s: waiting for machine to come up
	I0818 20:07:50.540225   74389 start.go:364] duration metric: took 3m14.505114335s to acquireMachinesLock for "old-k8s-version-247539"
	I0818 20:07:50.540275   74389 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:07:50.540294   74389 fix.go:54] fixHost starting: 
	I0818 20:07:50.540730   74389 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:07:50.540768   74389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:07:50.558479   74389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0818 20:07:50.558950   74389 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:07:50.559499   74389 main.go:141] libmachine: Using API Version  1
	I0818 20:07:50.559526   74389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:07:50.559882   74389 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:07:50.560074   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:07:50.560224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetState
	I0818 20:07:50.561756   74389 fix.go:112] recreateIfNeeded on old-k8s-version-247539: state=Stopped err=<nil>
	I0818 20:07:50.561790   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	W0818 20:07:50.561977   74389 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:07:50.563867   74389 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-247539" ...
	I0818 20:07:50.565173   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .Start
	I0818 20:07:50.565344   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring networks are active...
	I0818 20:07:50.566073   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network default is active
	I0818 20:07:50.566480   74389 main.go:141] libmachine: (old-k8s-version-247539) Ensuring network mk-old-k8s-version-247539 is active
	I0818 20:07:50.566909   74389 main.go:141] libmachine: (old-k8s-version-247539) Getting domain xml...
	I0818 20:07:50.567682   74389 main.go:141] libmachine: (old-k8s-version-247539) Creating domain...
	I0818 20:07:49.078185   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078646   73815 main.go:141] libmachine: (embed-certs-291295) Found IP for machine: 192.168.39.125
	I0818 20:07:49.078676   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has current primary IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.078682   73815 main.go:141] libmachine: (embed-certs-291295) Reserving static IP address...
	I0818 20:07:49.079061   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.079091   73815 main.go:141] libmachine: (embed-certs-291295) Reserved static IP address: 192.168.39.125
	I0818 20:07:49.079112   73815 main.go:141] libmachine: (embed-certs-291295) DBG | skip adding static IP to network mk-embed-certs-291295 - found existing host DHCP lease matching {name: "embed-certs-291295", mac: "52:54:00:b0:4d:ce", ip: "192.168.39.125"}
	I0818 20:07:49.079132   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Getting to WaitForSSH function...
	I0818 20:07:49.079148   73815 main.go:141] libmachine: (embed-certs-291295) Waiting for SSH to be available...
	I0818 20:07:49.081287   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081592   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.081645   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.081761   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH client type: external
	I0818 20:07:49.081788   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa (-rw-------)
	I0818 20:07:49.081823   73815 main.go:141] libmachine: (embed-certs-291295) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.125 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:07:49.081841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | About to run SSH command:
	I0818 20:07:49.081854   73815 main.go:141] libmachine: (embed-certs-291295) DBG | exit 0
	I0818 20:07:49.207649   73815 main.go:141] libmachine: (embed-certs-291295) DBG | SSH cmd err, output: <nil>: 
	I0818 20:07:49.208007   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetConfigRaw
	I0818 20:07:49.208604   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.211088   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211436   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.211464   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.211685   73815 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/config.json ...
	I0818 20:07:49.211906   73815 machine.go:93] provisionDockerMachine start ...
	I0818 20:07:49.211932   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:49.212156   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.214381   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214696   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.214722   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.214838   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.215001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215139   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.215264   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.215402   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.215637   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.215650   73815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:07:49.327972   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:07:49.328001   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328234   73815 buildroot.go:166] provisioning hostname "embed-certs-291295"
	I0818 20:07:49.328286   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.328495   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.331272   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331667   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.331695   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.331795   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.331967   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332124   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.332235   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.332387   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.332602   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.332620   73815 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-291295 && echo "embed-certs-291295" | sudo tee /etc/hostname
	I0818 20:07:49.457656   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-291295
	
	I0818 20:07:49.457692   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.460362   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460692   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.460724   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.460821   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.461040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461269   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.461419   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.461593   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:49.461791   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:49.461807   73815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-291295' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-291295/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-291295' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:07:49.580418   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:07:49.580448   73815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:07:49.580487   73815 buildroot.go:174] setting up certificates
	I0818 20:07:49.580501   73815 provision.go:84] configureAuth start
	I0818 20:07:49.580513   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetMachineName
	I0818 20:07:49.580787   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:49.583435   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.583801   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.583825   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.584097   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.586253   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586572   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.586606   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.586700   73815 provision.go:143] copyHostCerts
	I0818 20:07:49.586764   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:07:49.586786   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:07:49.586863   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:07:49.586984   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:07:49.586994   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:07:49.587034   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:07:49.587134   73815 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:07:49.587144   73815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:07:49.587182   73815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:07:49.587257   73815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.embed-certs-291295 san=[127.0.0.1 192.168.39.125 embed-certs-291295 localhost minikube]
	I0818 20:07:49.844689   73815 provision.go:177] copyRemoteCerts
	I0818 20:07:49.844745   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:07:49.844767   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:49.847172   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:49.847517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:49.847700   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:49.847898   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:49.848060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:49.848210   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:49.933798   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:07:49.957958   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:07:49.981551   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:07:50.004238   73815 provision.go:87] duration metric: took 423.726052ms to configureAuth
	I0818 20:07:50.004263   73815 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:07:50.004431   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:07:50.004494   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.006759   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007031   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.007059   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.007217   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.007437   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007603   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.007729   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.007894   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.008058   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.008072   73815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:07:50.287001   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:07:50.287027   73815 machine.go:96] duration metric: took 1.075103653s to provisionDockerMachine
	I0818 20:07:50.287038   73815 start.go:293] postStartSetup for "embed-certs-291295" (driver="kvm2")
	I0818 20:07:50.287047   73815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:07:50.287067   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.287451   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:07:50.287478   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.290150   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290493   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.290515   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.290727   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.290911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.291096   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.291233   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.379621   73815 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:07:50.388749   73815 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:07:50.388772   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:07:50.388844   73815 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:07:50.388927   73815 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:07:50.389046   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:07:50.398957   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:50.422817   73815 start.go:296] duration metric: took 135.767247ms for postStartSetup
	I0818 20:07:50.422859   73815 fix.go:56] duration metric: took 17.598982329s for fixHost
	I0818 20:07:50.422886   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.425514   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.425899   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.425926   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.426113   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.426332   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426505   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.426623   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.426798   73815 main.go:141] libmachine: Using SSH client type: native
	I0818 20:07:50.427018   73815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.125 22 <nil> <nil>}
	I0818 20:07:50.427033   73815 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:07:50.540087   73815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011670.500173623
	
	I0818 20:07:50.540113   73815 fix.go:216] guest clock: 1724011670.500173623
	I0818 20:07:50.540122   73815 fix.go:229] Guest: 2024-08-18 20:07:50.500173623 +0000 UTC Remote: 2024-08-18 20:07:50.42286401 +0000 UTC m=+287.764343419 (delta=77.309613ms)
	I0818 20:07:50.540140   73815 fix.go:200] guest clock delta is within tolerance: 77.309613ms
	I0818 20:07:50.540145   73815 start.go:83] releasing machines lock for "embed-certs-291295", held for 17.716293127s
	I0818 20:07:50.540172   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.540462   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:50.543280   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543688   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.543721   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.543911   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544386   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544639   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:07:50.544698   73815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:07:50.544749   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.544889   73815 ssh_runner.go:195] Run: cat /version.json
	I0818 20:07:50.544913   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:07:50.547481   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547813   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.547841   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547867   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.547962   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548165   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548281   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:50.548307   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:50.548340   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548431   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:07:50.548515   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.548576   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:07:50.548701   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:07:50.548874   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:07:50.628660   73815 ssh_runner.go:195] Run: systemctl --version
	I0818 20:07:50.653164   73815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:07:50.799158   73815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:07:50.805063   73815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:07:50.805134   73815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:07:50.820796   73815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:07:50.820822   73815 start.go:495] detecting cgroup driver to use...
	I0818 20:07:50.820901   73815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:07:50.837574   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:07:50.851913   73815 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:07:50.851981   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:07:50.865595   73815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:07:50.879240   73815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:07:50.990057   73815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:07:51.151540   73815 docker.go:233] disabling docker service ...
	I0818 20:07:51.151618   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:07:51.166231   73815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:07:51.180949   73815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:07:51.329174   73815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:07:51.460564   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:07:51.474929   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:07:51.494510   73815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:07:51.494573   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.507465   73815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:07:51.507533   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.519207   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.535742   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.551186   73815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:07:51.563233   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.574714   73815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.597948   73815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:07:51.609883   73815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:07:51.621040   73815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:07:51.621115   73815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:07:51.636305   73815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:07:51.646895   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:51.781890   73815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:07:51.927722   73815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:07:51.927799   73815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:07:51.932918   73815 start.go:563] Will wait 60s for crictl version
	I0818 20:07:51.933006   73815 ssh_runner.go:195] Run: which crictl
	I0818 20:07:51.936917   73815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:07:51.981063   73815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:07:51.981141   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.008566   73815 ssh_runner.go:195] Run: crio --version
	I0818 20:07:52.041182   73815 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:07:52.042348   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetIP
	I0818 20:07:52.045196   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045559   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:07:52.045588   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:07:52.045764   73815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 20:07:52.050188   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:52.065105   73815 kubeadm.go:883] updating cluster {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:07:52.065244   73815 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:07:52.065300   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:52.108608   73815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:07:52.108687   73815 ssh_runner.go:195] Run: which lz4
	I0818 20:07:52.112897   73815 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:07:52.117388   73815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:07:52.117421   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0818 20:07:51.828826   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting to get IP...
	I0818 20:07:51.829899   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:51.830315   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:51.830377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:51.830297   75210 retry.go:31] will retry after 219.676109ms: waiting for machine to come up
	I0818 20:07:52.051598   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.051926   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.051951   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.051887   75210 retry.go:31] will retry after 340.720644ms: waiting for machine to come up
	I0818 20:07:52.394562   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.395029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.395091   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.395019   75210 retry.go:31] will retry after 407.038872ms: waiting for machine to come up
	I0818 20:07:52.803339   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:52.803853   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:52.803882   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:52.803810   75210 retry.go:31] will retry after 412.505277ms: waiting for machine to come up
	I0818 20:07:53.218483   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.218938   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.218969   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.218907   75210 retry.go:31] will retry after 536.257446ms: waiting for machine to come up
	I0818 20:07:53.756577   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:53.756993   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:53.757021   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:53.756946   75210 retry.go:31] will retry after 887.413182ms: waiting for machine to come up
	I0818 20:07:54.645646   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:54.646117   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:54.646138   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:54.646074   75210 retry.go:31] will retry after 768.662375ms: waiting for machine to come up
	I0818 20:07:55.415911   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:55.416377   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:55.416406   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:55.416341   75210 retry.go:31] will retry after 1.313692426s: waiting for machine to come up
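
The repeated "will retry after ...: waiting for machine to come up" entries above come from a polling loop that re-queries the hypervisor for the domain's DHCP lease, sleeping for a growing, jittered interval between attempts. A minimal Go sketch of that pattern (the lookupIP helper and the exact intervals are illustrative, not the libmachine driver's code):

    package vmwait

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical helper that asks the hypervisor for the
    // domain's current IP; it fails while no DHCP lease exists yet.
    func lookupIP(domain string) (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls lookupIP with increasing, jittered delays until the
    // machine reports an address or the deadline passes.
    func waitForIP(domain string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff *= 2
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
    }
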
	I0818 20:07:53.532527   73815 crio.go:462] duration metric: took 1.419668609s to copy over tarball
	I0818 20:07:53.532605   73815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:07:55.664780   73815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.132141788s)
	I0818 20:07:55.664810   73815 crio.go:469] duration metric: took 2.132257968s to extract the tarball
	I0818 20:07:55.664820   73815 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:07:55.702662   73815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:07:55.745782   73815 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:07:55.745801   73815 cache_images.go:84] Images are preloaded, skipping loading
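
The preload sequence above first lists the node's images with "crictl images --output json", copies the cached tarball over only when the expected images are missing, extracts it under /var with tar and lz4, and then re-lists the images to confirm the cache is populated. A rough Go sketch of those two steps (function names are illustrative, not minikube's cache_images.go):

    package preload

    import (
        "os/exec"
        "strings"
    )

    // hasImage reports whether crictl already lists the given image on the node.
    func hasImage(image string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        return strings.Contains(string(out), image), nil
    }

    // extractPreload unpacks the lz4-compressed tarball into /var, mirroring the
    // "tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf" call in the log.
    func extractPreload(tarball string) error {
        return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball).Run()
    }
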
	I0818 20:07:55.745809   73815 kubeadm.go:934] updating node { 192.168.39.125 8443 v1.31.0 crio true true} ...
	I0818 20:07:55.745921   73815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-291295 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.125
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:07:55.745985   73815 ssh_runner.go:195] Run: crio config
	I0818 20:07:55.788458   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:07:55.788484   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:07:55.788503   73815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:07:55.788537   73815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.125 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-291295 NodeName:embed-certs-291295 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.125"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.125 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:07:55.788723   73815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.125
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-291295"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.125
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.125"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:07:55.788800   73815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:07:55.798787   73815 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:07:55.798860   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:07:55.808532   73815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0818 20:07:55.825731   73815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:07:55.842287   73815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
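
The kubeadm.yaml.new written above (and printed in full a few lines earlier) is rendered from the values in the "kubeadm options" struct: advertise address, bind port, pod and service subnets, cluster name and Kubernetes version. A hedged illustration of that kind of templating in Go (a text/template sketch with made-up names covering only the InitConfiguration fragment, not minikube's actual template):

    package kubeadmcfg

    import (
        "bytes"
        "text/template"
    )

    // Params carries the values substituted into the fragment below; the field
    // names are illustrative, not minikube's.
    type Params struct {
        AdvertiseAddress string
        BindPort         int
    }

    // initFragment is only the InitConfiguration part; the real file also carries
    // ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration blocks.
    const initFragment = "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\nlocalAPIEndpoint:\n  advertiseAddress: {{.AdvertiseAddress}}\n  bindPort: {{.BindPort}}\n"

    var initTmpl = template.Must(template.New("kubeadm").Parse(initFragment))

    // Render produces a YAML fragment of the same shape as the start of
    // /var/tmp/minikube/kubeadm.yaml.new.
    func Render(p Params) (string, error) {
        var buf bytes.Buffer
        if err := initTmpl.Execute(&buf, p); err != nil {
            return "", err
        }
        return buf.String(), nil
    }
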
	I0818 20:07:55.860058   73815 ssh_runner.go:195] Run: grep 192.168.39.125	control-plane.minikube.internal$ /etc/hosts
	I0818 20:07:55.864007   73815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.125	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:07:55.876297   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:07:55.999076   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:07:56.015305   73815 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295 for IP: 192.168.39.125
	I0818 20:07:56.015325   73815 certs.go:194] generating shared ca certs ...
	I0818 20:07:56.015339   73815 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:07:56.015505   73815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:07:56.015548   73815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:07:56.015557   73815 certs.go:256] generating profile certs ...
	I0818 20:07:56.015633   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/client.key
	I0818 20:07:56.015689   73815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key.a8bddcfe
	I0818 20:07:56.015732   73815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key
	I0818 20:07:56.015846   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:07:56.015885   73815 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:07:56.015898   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:07:56.015953   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:07:56.015979   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:07:56.015999   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:07:56.016036   73815 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:07:56.016660   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:07:56.044323   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:07:56.079231   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:07:56.111738   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:07:56.134817   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0818 20:07:56.160819   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:07:56.185806   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:07:56.210116   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/embed-certs-291295/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:07:56.234185   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:07:56.256896   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:07:56.279505   73815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:07:56.302178   73815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:07:56.318931   73815 ssh_runner.go:195] Run: openssl version
	I0818 20:07:56.324865   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:07:56.336272   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340825   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.340872   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:07:56.346515   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:07:56.357471   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:07:56.368211   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372600   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.372662   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:07:56.378152   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:07:56.388868   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:07:56.399297   73815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403628   73815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.403663   73815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:07:56.409041   73815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:07:56.419342   73815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:07:56.423757   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:07:56.429341   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:07:56.435012   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:07:56.440752   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:07:56.446305   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:07:56.452219   73815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
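
Each "openssl x509 -noout -in <cert> -checkend 86400" run above exits non-zero if the certificate expires within the next 24 hours, which is how a soon-to-expire cert would be flagged for regeneration. An equivalent check written directly in Go (a sketch under that assumption, not minikube's certs code):

    package certs

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path will
    // expire within d (86400 seconds in the log's -checkend calls).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }
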
	I0818 20:07:56.458004   73815 kubeadm.go:392] StartCluster: {Name:embed-certs-291295 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-291295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:07:56.458133   73815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:07:56.458181   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.495200   73815 cri.go:89] found id: ""
	I0818 20:07:56.495281   73815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:07:56.505834   73815 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:07:56.505854   73815 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:07:56.505903   73815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:07:56.516025   73815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:07:56.516962   73815 kubeconfig.go:125] found "embed-certs-291295" server: "https://192.168.39.125:8443"
	I0818 20:07:56.518789   73815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:07:56.528513   73815 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.125
	I0818 20:07:56.528541   73815 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:07:56.528556   73815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:07:56.528612   73815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:07:56.568091   73815 cri.go:89] found id: ""
	I0818 20:07:56.568161   73815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:07:56.584012   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:07:56.593697   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:07:56.593712   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:07:56.593746   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:07:56.603071   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:07:56.603112   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:07:56.612422   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:07:56.621194   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:07:56.621243   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:07:56.630252   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.640086   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:07:56.640138   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:07:56.649323   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:07:56.658055   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:07:56.658110   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:07:56.667134   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:07:56.676460   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.783806   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.515850   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:56.731538   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:56.731959   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:56.731990   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:56.731916   75210 retry.go:31] will retry after 1.411841207s: waiting for machine to come up
	I0818 20:07:58.145416   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:07:58.145849   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:07:58.145875   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:07:58.145805   75210 retry.go:31] will retry after 2.268716529s: waiting for machine to come up
	I0818 20:08:00.417365   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:00.417890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:00.417919   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:00.417851   75210 retry.go:31] will retry after 2.0623739s: waiting for machine to come up
	I0818 20:07:57.710065   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.780213   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:07:57.854365   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:07:57.854458   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.355246   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:58.854602   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.355211   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:07:59.854991   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.354593   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:00.368818   73815 api_server.go:72] duration metric: took 2.514473789s to wait for apiserver process to appear ...
	I0818 20:08:00.368844   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:00.368866   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.832413   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:02.832449   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:02.832466   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.924768   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.924804   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:02.924820   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:02.929839   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:02.929869   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.369350   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.373766   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.373796   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:03.869333   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:03.874889   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:03.874919   73815 api_server.go:103] status: https://192.168.39.125:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:04.369187   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:08:04.374739   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:08:04.383736   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:04.383764   73815 api_server.go:131] duration metric: took 4.014913233s to wait for apiserver health ...
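
The healthz loop above repeatedly GETs https://192.168.39.125:8443/healthz, treating the initial 403 (anonymous access) and the 500s (bootstrap post-start hooks still running) as "not ready yet" and only proceeding once a 200 "ok" comes back. A minimal polling sketch of that behaviour (TLS verification is skipped here because the bootstrap CA is self-signed; this is not the api_server.go implementation):

    package health

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // the bootstrap apiserver presents a self-signed CA, so skip verification here
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
        }
        return fmt.Errorf("apiserver %s never became healthy", url)
    }
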
	I0818 20:08:04.383773   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:08:04.383779   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:04.385486   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:02.482610   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:02.483029   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:02.483055   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:02.482978   75210 retry.go:31] will retry after 2.603573897s: waiting for machine to come up
	I0818 20:08:05.089691   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:05.090150   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | unable to find current IP address of domain old-k8s-version-247539 in network mk-old-k8s-version-247539
	I0818 20:08:05.090295   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | I0818 20:08:05.090095   75210 retry.go:31] will retry after 4.362318817s: waiting for machine to come up
	I0818 20:08:04.386800   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:04.403476   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:04.422354   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:04.435181   73815 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:04.435222   73815 system_pods.go:61] "coredns-6f6b679f8f-wvd9k" [02369649-1565-437d-8b19-a67adfe13d45] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:04.435237   73815 system_pods.go:61] "etcd-embed-certs-291295" [1e9f0b7d-bb65-4867-821e-b9af34338b3e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:04.435246   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [bb884a00-e058-4348-bc6a-427c64f4c68d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:04.435261   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [3a359998-cdb6-46ef-a018-e03e70cb33e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:04.435269   73815 system_pods.go:61] "kube-proxy-5fjm2" [bb15b1d9-8221-473a-b0c7-8c65b3b18bf7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 20:08:04.435276   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [4ed7725a-b0e6-4bc0-b0bd-913eb15fd4bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:04.435287   73815 system_pods.go:61] "metrics-server-6867b74b74-g2kt7" [c23cc238-51f0-402c-a0c1-4aecc020d845] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:04.435294   73815 system_pods.go:61] "storage-provisioner" [2dcad3a1-15f0-41b9-8398-5a6e2d8763b6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 20:08:04.435303   73815 system_pods.go:74] duration metric: took 12.928394ms to wait for pod list to return data ...
	I0818 20:08:04.435314   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:04.439127   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:04.439150   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:04.439161   73815 node_conditions.go:105] duration metric: took 3.84281ms to run NodePressure ...
	I0818 20:08:04.439176   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:04.720705   73815 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726814   73815 kubeadm.go:739] kubelet initialised
	I0818 20:08:04.726835   73815 kubeadm.go:740] duration metric: took 6.104356ms waiting for restarted kubelet to initialise ...
	I0818 20:08:04.726843   73815 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:04.736000   73815 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.741473   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741509   73815 pod_ready.go:82] duration metric: took 5.472852ms for pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.741523   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "coredns-6f6b679f8f-wvd9k" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.741534   73815 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.749841   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749872   73815 pod_ready.go:82] duration metric: took 8.326743ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.749883   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "etcd-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.749891   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.756947   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.756997   73815 pod_ready.go:82] duration metric: took 7.079861ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.757011   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.757019   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:04.825829   73815 pod_ready.go:98] node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825865   73815 pod_ready.go:82] duration metric: took 68.834734ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:04.825878   73815 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-291295" hosting pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-291295" has status "Ready":"False"
	I0818 20:08:04.825888   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225761   73815 pod_ready.go:93] pod "kube-proxy-5fjm2" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:05.225786   73815 pod_ready.go:82] duration metric: took 399.888138ms for pod "kube-proxy-5fjm2" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:05.225796   73815 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:07.232250   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
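
The pod_ready.go lines above wait up to 4m0s for each system-critical pod to report the Ready condition, and deliberately skip pods whose node is itself not yet "Ready" (the "(skipping!)" warnings). A condensed version of that Ready check using client-go (assumes an already-configured clientset; not minikube's pod_ready.go):

    package podready

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isReady returns true once the named pod reports the Ready=True condition.
    func isReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    // waitReady polls isReady until it succeeds or the timeout elapses.
    func waitReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if ok, _ := isReady(cs, ns, name); ok {
                return true
            }
            time.Sleep(2 * time.Second)
        }
        return false
    }
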
	I0818 20:08:10.744305   74485 start.go:364] duration metric: took 3m27.85511004s to acquireMachinesLock for "default-k8s-diff-port-852598"
	I0818 20:08:10.744365   74485 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:10.744384   74485 fix.go:54] fixHost starting: 
	I0818 20:08:10.744751   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:10.744791   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:10.764317   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I0818 20:08:10.764799   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:10.765323   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:08:10.765349   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:10.765723   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:10.765929   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:10.766110   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:08:10.767735   74485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-852598: state=Stopped err=<nil>
	I0818 20:08:10.767763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	W0818 20:08:10.767931   74485 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:10.770197   74485 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-852598" ...
	I0818 20:08:09.457009   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457480   74389 main.go:141] libmachine: (old-k8s-version-247539) Found IP for machine: 192.168.50.105
	I0818 20:08:09.457504   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has current primary IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.457510   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserving static IP address...
	I0818 20:08:09.457857   74389 main.go:141] libmachine: (old-k8s-version-247539) Reserved static IP address: 192.168.50.105
	I0818 20:08:09.457890   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.457906   74389 main.go:141] libmachine: (old-k8s-version-247539) Waiting for SSH to be available...
	I0818 20:08:09.457954   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | skip adding static IP to network mk-old-k8s-version-247539 - found existing host DHCP lease matching {name: "old-k8s-version-247539", mac: "52:54:00:5a:f6:41", ip: "192.168.50.105"}
	I0818 20:08:09.457980   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Getting to WaitForSSH function...
	I0818 20:08:09.459881   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460216   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.460247   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.460335   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH client type: external
	I0818 20:08:09.460362   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa (-rw-------)
	I0818 20:08:09.460392   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:09.460408   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | About to run SSH command:
	I0818 20:08:09.460423   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | exit 0
	I0818 20:08:09.587475   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:09.587919   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetConfigRaw
	I0818 20:08:09.588655   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.591521   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.591895   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.591930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.592184   74389 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/config.json ...
	I0818 20:08:09.592383   74389 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:09.592402   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:09.592619   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.595096   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595499   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.595537   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.595665   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.595845   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596011   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.596111   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.596286   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.596468   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.596481   74389 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:09.707554   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:09.707586   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707816   74389 buildroot.go:166] provisioning hostname "old-k8s-version-247539"
	I0818 20:08:09.707839   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.707996   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.710689   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.710998   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.711023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.711174   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.711335   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711506   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.711653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.711794   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.711953   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.711965   74389 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-247539 && echo "old-k8s-version-247539" | sudo tee /etc/hostname
	I0818 20:08:09.841700   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-247539
	
	I0818 20:08:09.841733   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.844811   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.845219   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.845414   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:09.845648   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845815   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:09.845975   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:09.846114   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:09.846289   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:09.846307   74389 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-247539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-247539/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-247539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:09.968115   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:09.968148   74389 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:09.968182   74389 buildroot.go:174] setting up certificates
	I0818 20:08:09.968201   74389 provision.go:84] configureAuth start
	I0818 20:08:09.968211   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetMachineName
	I0818 20:08:09.968477   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:09.971245   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971609   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.971649   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.971836   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:09.974262   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974631   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:09.974662   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:09.974773   74389 provision.go:143] copyHostCerts
	I0818 20:08:09.974836   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:09.974856   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:09.974927   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:09.975051   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:09.975062   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:09.975096   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:09.975177   74389 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:09.975187   74389 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:09.975224   74389 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:09.975294   74389 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-247539 san=[127.0.0.1 192.168.50.105 localhost minikube old-k8s-version-247539]
	I0818 20:08:10.049896   74389 provision.go:177] copyRemoteCerts
	I0818 20:08:10.049989   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:10.050026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.052644   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.052968   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.053023   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.053215   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.053426   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.053581   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.053716   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.141995   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:10.166600   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 20:08:10.190836   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:10.214683   74389 provision.go:87] duration metric: took 246.47172ms to configureAuth
	I0818 20:08:10.214710   74389 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:10.214905   74389 config.go:182] Loaded profile config "old-k8s-version-247539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0818 20:08:10.214993   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.217707   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218072   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.218103   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.218274   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.218459   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218626   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.218774   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.218933   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.219096   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.219111   74389 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:10.494182   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:10.494210   74389 machine.go:96] duration metric: took 901.814539ms to provisionDockerMachine
	I0818 20:08:10.494224   74389 start.go:293] postStartSetup for "old-k8s-version-247539" (driver="kvm2")
	I0818 20:08:10.494236   74389 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:10.494273   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.494702   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:10.494735   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.497498   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.497900   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.497924   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.498148   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.498393   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.498600   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.498790   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.586021   74389 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:10.590105   74389 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:10.590127   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:10.590196   74389 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:10.590297   74389 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:10.590441   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:10.599904   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:10.623173   74389 start.go:296] duration metric: took 128.936199ms for postStartSetup
	I0818 20:08:10.623209   74389 fix.go:56] duration metric: took 20.082924466s for fixHost
	I0818 20:08:10.623227   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.625930   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626261   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.626292   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.626458   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.626671   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626833   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.626979   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.627138   74389 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:10.627301   74389 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.105 22 <nil> <nil>}
	I0818 20:08:10.627312   74389 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:10.744140   74389 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011690.717307394
	
	I0818 20:08:10.744167   74389 fix.go:216] guest clock: 1724011690.717307394
	I0818 20:08:10.744180   74389 fix.go:229] Guest: 2024-08-18 20:08:10.717307394 +0000 UTC Remote: 2024-08-18 20:08:10.623212963 +0000 UTC m=+214.726112365 (delta=94.094431ms)
	I0818 20:08:10.744215   74389 fix.go:200] guest clock delta is within tolerance: 94.094431ms
	I0818 20:08:10.744219   74389 start.go:83] releasing machines lock for "old-k8s-version-247539", held for 20.203967279s
	I0818 20:08:10.744256   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.744534   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:10.747202   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.747764   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.747798   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.748026   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748636   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748835   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .DriverName
	I0818 20:08:10.748919   74389 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:10.748966   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.749272   74389 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:10.749295   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHHostname
	I0818 20:08:10.752016   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753077   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753126   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753184   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753338   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753516   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.753653   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.753688   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:10.753723   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:10.753858   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHPort
	I0818 20:08:10.753871   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.754224   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHKeyPath
	I0818 20:08:10.754357   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetSSHUsername
	I0818 20:08:10.754520   74389 sshutil.go:53] new ssh client: &{IP:192.168.50.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/old-k8s-version-247539/id_rsa Username:docker}
	I0818 20:08:10.841788   74389 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:10.864819   74389 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:11.013008   74389 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:11.019482   74389 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:11.019553   74389 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:11.037309   74389 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:11.037336   74389 start.go:495] detecting cgroup driver to use...
	I0818 20:08:11.037401   74389 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:11.056917   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:11.071658   74389 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:11.071723   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:11.090677   74389 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:11.107084   74389 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:11.248982   74389 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:11.400240   74389 docker.go:233] disabling docker service ...
	I0818 20:08:11.400315   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:11.415480   74389 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:11.429815   74389 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:11.585119   74389 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:11.716996   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:11.731669   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:11.751706   74389 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0818 20:08:11.751764   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.762316   74389 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:11.762373   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.773065   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.786513   74389 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:11.798764   74389 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:11.810236   74389 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:11.820137   74389 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:11.820206   74389 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:11.836845   74389 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 20:08:11.850640   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:11.967429   74389 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:12.107091   74389 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:12.107168   74389 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:12.112112   74389 start.go:563] Will wait 60s for crictl version
	I0818 20:08:12.112193   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:12.115988   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:12.165396   74389 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:12.165481   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.195005   74389 ssh_runner.go:195] Run: crio --version
	I0818 20:08:12.228005   74389 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0818 20:08:09.234086   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:11.732954   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:10.771461   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Start
	I0818 20:08:10.771638   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring networks are active...
	I0818 20:08:10.772332   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network default is active
	I0818 20:08:10.772645   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Ensuring network mk-default-k8s-diff-port-852598 is active
	I0818 20:08:10.773119   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Getting domain xml...
	I0818 20:08:10.773840   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Creating domain...
	I0818 20:08:12.058765   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting to get IP...
	I0818 20:08:12.059745   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.060236   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.060152   75353 retry.go:31] will retry after 227.793826ms: waiting for machine to come up
	I0818 20:08:12.289622   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290038   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.290061   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.290013   75353 retry.go:31] will retry after 288.501286ms: waiting for machine to come up
	I0818 20:08:12.580672   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581158   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:12.581183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:12.581120   75353 retry.go:31] will retry after 460.489481ms: waiting for machine to come up
	I0818 20:08:12.229512   74389 main.go:141] libmachine: (old-k8s-version-247539) Calling .GetIP
	I0818 20:08:12.232830   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233299   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:f6:41", ip: ""} in network mk-old-k8s-version-247539: {Iface:virbr2 ExpiryTime:2024-08-18 21:08:02 +0000 UTC Type:0 Mac:52:54:00:5a:f6:41 Iaid: IPaddr:192.168.50.105 Prefix:24 Hostname:old-k8s-version-247539 Clientid:01:52:54:00:5a:f6:41}
	I0818 20:08:12.233328   74389 main.go:141] libmachine: (old-k8s-version-247539) DBG | domain old-k8s-version-247539 has defined IP address 192.168.50.105 and MAC address 52:54:00:5a:f6:41 in network mk-old-k8s-version-247539
	I0818 20:08:12.233562   74389 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:12.237890   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:12.250838   74389 kubeadm.go:883] updating cluster {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:12.250937   74389 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 20:08:12.250977   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:12.301003   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:12.301057   74389 ssh_runner.go:195] Run: which lz4
	I0818 20:08:12.305502   74389 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:12.309800   74389 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:12.309837   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0818 20:08:14.000765   74389 crio.go:462] duration metric: took 1.695296357s to copy over tarball
	I0818 20:08:14.000849   74389 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:13.736819   73815 pod_ready.go:103] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:14.732761   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:14.732783   73815 pod_ready.go:82] duration metric: took 9.506980075s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:14.732792   73815 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:16.739855   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:13.042839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043444   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.043475   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.043413   75353 retry.go:31] will retry after 542.076458ms: waiting for machine to come up
	I0818 20:08:13.586675   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587296   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:13.587326   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:13.587216   75353 retry.go:31] will retry after 553.588704ms: waiting for machine to come up
	I0818 20:08:14.142076   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142714   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.142737   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.142616   75353 retry.go:31] will retry after 852.179264ms: waiting for machine to come up
	I0818 20:08:14.996732   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997226   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:14.997258   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:14.997175   75353 retry.go:31] will retry after 732.180291ms: waiting for machine to come up
	I0818 20:08:15.731247   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731741   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:15.731771   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:15.731699   75353 retry.go:31] will retry after 1.456328641s: waiting for machine to come up
	I0818 20:08:17.189586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:17.190071   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:17.189997   75353 retry.go:31] will retry after 1.632315907s: waiting for machine to come up
	I0818 20:08:16.899673   74389 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.898792062s)
	I0818 20:08:16.899706   74389 crio.go:469] duration metric: took 2.898910786s to extract the tarball
	I0818 20:08:16.899715   74389 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:16.942226   74389 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:16.980974   74389 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0818 20:08:16.981000   74389 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:16.981097   74389 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.981130   74389 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.981154   74389 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0818 20:08:16.981209   74389 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.981233   74389 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.981241   74389 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:16.981158   74389 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.981098   74389 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:16.982836   74389 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:16.982808   74389 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:16.982810   74389 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:16.982814   74389 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0818 20:08:16.982820   74389 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:16.982878   74389 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.116211   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.125641   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.153287   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0818 20:08:17.183284   74389 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0818 20:08:17.183349   74389 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.183413   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.184601   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.186783   74389 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0818 20:08:17.186817   74389 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.186850   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.225404   74389 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0818 20:08:17.225448   74389 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0818 20:08:17.225466   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.225487   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251219   74389 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0818 20:08:17.251266   74389 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.251283   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.251305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.251333   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.275534   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.315800   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.324140   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.324943   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.331566   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.331634   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.349556   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.357897   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0818 20:08:17.463529   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0818 20:08:17.498215   74389 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0818 20:08:17.498258   74389 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.498305   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.498352   74389 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0818 20:08:17.498366   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0818 20:08:17.498388   74389 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.498309   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.498436   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.532772   74389 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0818 20:08:17.532820   74389 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.532839   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0818 20:08:17.532872   74389 ssh_runner.go:195] Run: which crictl
	I0818 20:08:17.573888   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0818 20:08:17.579642   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0818 20:08:17.579736   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.579764   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0818 20:08:17.579777   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.579805   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.655836   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0818 20:08:17.655926   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.675115   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.675123   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.712378   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0818 20:08:17.743602   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0818 20:08:17.743722   74389 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0818 20:08:17.780082   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0818 20:08:17.797560   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0818 20:08:17.809801   74389 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0818 20:08:17.902291   74389 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:18.047551   74389 cache_images.go:92] duration metric: took 1.066518876s to LoadCachedImages
	W0818 20:08:18.047643   74389 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0818 20:08:18.047659   74389 kubeadm.go:934] updating node { 192.168.50.105 8443 v1.20.0 crio true true} ...
	I0818 20:08:18.047819   74389 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-247539 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
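
Note: the kubelet unit drop-in printed above is what minikube copies onto the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 430-byte scp entry further down) before reloading systemd and starting the kubelet. A minimal sketch of the equivalent manual steps, mirroring the commands later in this log:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # write the [Unit]/[Service]/[Install] drop-in shown above to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
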
	I0818 20:08:18.047909   74389 ssh_runner.go:195] Run: crio config
	I0818 20:08:18.095513   74389 cni.go:84] Creating CNI manager for ""
	I0818 20:08:18.095541   74389 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:18.095557   74389 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:18.095582   74389 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.105 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-247539 NodeName:old-k8s-version-247539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 20:08:18.095762   74389 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-247539"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
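
Note: the generated kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2123 bytes in the scp entry below), promoted to /var/tmp/minikube/kubeadm.yaml, and then replayed phase by phase against the pinned v1.20.0 binaries during the restart. Condensed from the commands later in this log:

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml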
	
	I0818 20:08:18.095836   74389 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 20:08:18.106033   74389 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:18.106112   74389 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:18.116896   74389 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0818 20:08:18.134704   74389 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:18.151428   74389 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0818 20:08:18.170826   74389 ssh_runner.go:195] Run: grep 192.168.50.105	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:18.174916   74389 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:18.187583   74389 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:18.322839   74389 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:18.348693   74389 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539 for IP: 192.168.50.105
	I0818 20:08:18.348719   74389 certs.go:194] generating shared ca certs ...
	I0818 20:08:18.348738   74389 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.348901   74389 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:18.348939   74389 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:18.348949   74389 certs.go:256] generating profile certs ...
	I0818 20:08:18.349047   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/client.key
	I0818 20:08:18.349111   74389 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key.3812b43e
	I0818 20:08:18.349201   74389 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key
	I0818 20:08:18.349357   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:18.349396   74389 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:18.349406   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:18.349431   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:18.349465   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:18.349493   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:18.349542   74389 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:18.350419   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:18.397192   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:18.430700   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:18.457007   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:18.489024   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 20:08:18.531497   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:08:18.578412   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:18.617225   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/old-k8s-version-247539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:08:18.642453   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:18.666875   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:18.690391   74389 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:18.717403   74389 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:18.734896   74389 ssh_runner.go:195] Run: openssl version
	I0818 20:08:18.741161   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:18.752692   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757471   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.757551   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:18.763551   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:18.775247   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:18.787681   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792277   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.792319   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:18.798030   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:18.810440   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:18.821861   74389 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826722   74389 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.826809   74389 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:18.833063   74389 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
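
Note: the b5213941.0, 51391683.0 and 3ec20f2e.0 link names above are OpenSSL subject hashes; for each CA dropped into /usr/share/ca-certificates, minikube computes the hash and symlinks /etc/ssl/certs/<hash>.0 to it so the standard system trust lookup can resolve the certificate. A minimal sketch of the same step by hand, using the minikubeCA.pem path from this log:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
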
	I0818 20:08:18.845691   74389 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:18.850338   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:18.856317   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:18.862558   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:18.868624   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:18.874496   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:18.880299   74389 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
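
Note: the -checkend 86400 probes above are the freshness test for the existing control-plane certificates: openssl x509 -checkend exits 0 when the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, so a zero exit means the certificate can be kept rather than regenerated. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "still valid in 24h" || echo "expiring or unreadable"
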
	I0818 20:08:18.886142   74389 kubeadm.go:392] StartCluster: {Name:old-k8s-version-247539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-247539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.105 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:18.886233   74389 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:18.886280   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:18.925747   74389 cri.go:89] found id: ""
	I0818 20:08:18.925809   74389 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:18.936769   74389 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:18.936791   74389 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:18.936842   74389 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:18.946856   74389 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:18.948418   74389 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-247539" does not appear in /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:08:18.950629   74389 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-7747/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-247539" cluster setting kubeconfig missing "old-k8s-version-247539" context setting]
	I0818 20:08:18.952703   74389 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:18.962143   74389 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:18.974522   74389 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.105
	I0818 20:08:18.974554   74389 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:18.974566   74389 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:18.974622   74389 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:19.016008   74389 cri.go:89] found id: ""
	I0818 20:08:19.016085   74389 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:19.035499   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:19.047054   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:19.047077   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:19.047120   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:08:19.058178   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:19.058261   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:19.068528   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:08:19.077871   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:19.077927   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:19.087488   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.097066   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:19.097138   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:19.106960   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:08:19.117536   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:19.117599   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:19.128539   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:19.139578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:19.268395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.321878   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05343986s)
	I0818 20:08:20.321914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.552200   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.660998   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:20.773769   74389 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:20.773856   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:18.740885   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:21.239526   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:18.824458   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824827   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:18.824859   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:18.824772   75353 retry.go:31] will retry after 2.077122736s: waiting for machine to come up
	I0818 20:08:20.903734   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904176   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:20.904203   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:20.904139   75353 retry.go:31] will retry after 1.975638775s: waiting for machine to come up
	I0818 20:08:21.274237   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:21.773994   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.274943   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:22.773907   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.773896   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.274570   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:24.774313   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.274239   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:25.774772   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:23.239765   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:25.739127   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:22.882020   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882511   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:22.882538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:22.882450   75353 retry.go:31] will retry after 3.362090127s: waiting for machine to come up
	I0818 20:08:26.246148   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246523   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | unable to find current IP address of domain default-k8s-diff-port-852598 in network mk-default-k8s-diff-port-852598
	I0818 20:08:26.246547   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | I0818 20:08:26.246479   75353 retry.go:31] will retry after 3.188423251s: waiting for machine to come up
	I0818 20:08:30.732227   73711 start.go:364] duration metric: took 52.90798246s to acquireMachinesLock for "no-preload-944426"
	I0818 20:08:30.732291   73711 start.go:96] Skipping create...Using existing machine configuration
	I0818 20:08:30.732302   73711 fix.go:54] fixHost starting: 
	I0818 20:08:30.732702   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:08:30.732738   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:08:30.749873   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I0818 20:08:30.750371   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:08:30.750922   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:08:30.750951   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:08:30.751323   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:08:30.751547   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:30.751748   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:08:30.753437   73711 fix.go:112] recreateIfNeeded on no-preload-944426: state=Stopped err=<nil>
	I0818 20:08:30.753460   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	W0818 20:08:30.753623   73711 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 20:08:30.756026   73711 out.go:177] * Restarting existing kvm2 VM for "no-preload-944426" ...
	I0818 20:08:26.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:26.774664   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.274392   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:27.774835   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.274750   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:28.774874   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.274180   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.774226   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.274486   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:30.774515   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:29.438706   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439209   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Found IP for machine: 192.168.72.111
	I0818 20:08:29.439225   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserving static IP address...
	I0818 20:08:29.439241   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has current primary IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.439712   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.439740   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | skip adding static IP to network mk-default-k8s-diff-port-852598 - found existing host DHCP lease matching {name: "default-k8s-diff-port-852598", mac: "52:54:00:14:a7:8a", ip: "192.168.72.111"}
	I0818 20:08:29.439754   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Reserved static IP address: 192.168.72.111
	I0818 20:08:29.439769   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Waiting for SSH to be available...
	I0818 20:08:29.439786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Getting to WaitForSSH function...
	I0818 20:08:29.442039   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442351   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.442378   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.442515   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH client type: external
	I0818 20:08:29.442545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa (-rw-------)
	I0818 20:08:29.442569   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.111 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:29.442580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | About to run SSH command:
	I0818 20:08:29.442592   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | exit 0
	I0818 20:08:29.567586   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:29.567935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetConfigRaw
	I0818 20:08:29.568553   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.570763   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571150   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.571183   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.571367   74485 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/config.json ...
	I0818 20:08:29.571585   74485 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:29.571608   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:29.571839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.574102   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574560   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.574598   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.574753   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.574920   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.575219   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.575421   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.575610   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.575623   74485 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:29.683677   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:29.683705   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.683980   74485 buildroot.go:166] provisioning hostname "default-k8s-diff-port-852598"
	I0818 20:08:29.684010   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.684210   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.687062   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687490   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.687518   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.687656   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.687817   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.687954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.688105   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.688270   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.688444   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.688457   74485 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-852598 && echo "default-k8s-diff-port-852598" | sudo tee /etc/hostname
	I0818 20:08:29.810790   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-852598
	
	I0818 20:08:29.810821   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.813448   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813839   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.813868   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.813992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:29.814159   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814322   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:29.814457   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:29.814613   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:29.814821   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:29.814847   74485 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-852598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-852598/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-852598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:29.934730   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
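
Note: the provisioning snippet above is idempotent: it only edits /etc/hosts when no line already ends in the new hostname, rewriting an existing 127.0.1.1 entry in place and otherwise appending one, so repeated runs leave a single line of the form:

    127.0.1.1 default-k8s-diff-port-852598
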
	I0818 20:08:29.934762   74485 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:29.934818   74485 buildroot.go:174] setting up certificates
	I0818 20:08:29.934834   74485 provision.go:84] configureAuth start
	I0818 20:08:29.934848   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetMachineName
	I0818 20:08:29.935133   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:29.938004   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938365   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.938385   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.938612   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:29.940910   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941267   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:29.941298   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:29.941376   74485 provision.go:143] copyHostCerts
	I0818 20:08:29.941429   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:29.941446   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:29.941498   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:29.941583   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:29.941591   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:29.941609   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:29.941657   74485 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:29.941664   74485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:29.941683   74485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:29.941726   74485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-852598 san=[127.0.0.1 192.168.72.111 default-k8s-diff-port-852598 localhost minikube]
	I0818 20:08:30.047223   74485 provision.go:177] copyRemoteCerts
	I0818 20:08:30.047284   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:30.047310   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.049891   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050165   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.050195   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.050394   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.050580   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.050750   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.050910   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.133873   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:30.158887   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0818 20:08:30.183930   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 20:08:30.208851   74485 provision.go:87] duration metric: took 274.002401ms to configureAuth
	I0818 20:08:30.208888   74485 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:30.209075   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:30.209144   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.211913   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212274   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.212305   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.212521   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.212718   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.212897   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.213060   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.213313   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.213531   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.213564   74485 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:30.490496   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
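
Note: the provisioner writes that single CRIO_MINIKUBE_OPTIONS line to /etc/sysconfig/crio.minikube and restarts CRI-O; marking the 10.96.0.0/12 service CIDR as an insecure registry range lets image pulls from in-cluster registry services (e.g. the registry addon) go over plain HTTP. To verify on the node, something like:

    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio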
	
	I0818 20:08:30.490524   74485 machine.go:96] duration metric: took 918.924484ms to provisionDockerMachine
	I0818 20:08:30.490541   74485 start.go:293] postStartSetup for "default-k8s-diff-port-852598" (driver="kvm2")
	I0818 20:08:30.490555   74485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:30.490576   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.490879   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:30.490904   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.493538   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.493863   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.493894   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.494015   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.494211   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.494367   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.494513   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.582020   74485 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:30.586488   74485 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:30.586510   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:30.586568   74485 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:30.586656   74485 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:30.586743   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:30.595907   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:30.619808   74485 start.go:296] duration metric: took 129.254668ms for postStartSetup
	I0818 20:08:30.619842   74485 fix.go:56] duration metric: took 19.875457987s for fixHost
	I0818 20:08:30.619861   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.622487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622802   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.622836   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.622978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.623181   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623338   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.623489   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.623663   74485 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:30.623819   74485 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.111 22 <nil> <nil>}
	I0818 20:08:30.623829   74485 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:30.732011   74485 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011710.692571104
	
	I0818 20:08:30.732033   74485 fix.go:216] guest clock: 1724011710.692571104
	I0818 20:08:30.732040   74485 fix.go:229] Guest: 2024-08-18 20:08:30.692571104 +0000 UTC Remote: 2024-08-18 20:08:30.619845545 +0000 UTC m=+227.865652589 (delta=72.725559ms)
	I0818 20:08:30.732088   74485 fix.go:200] guest clock delta is within tolerance: 72.725559ms
	I0818 20:08:30.732098   74485 start.go:83] releasing machines lock for "default-k8s-diff-port-852598", held for 19.987759602s
	I0818 20:08:30.732126   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.732380   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:30.735249   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735696   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.735724   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.735987   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736665   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736886   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:08:30.736961   74485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:30.737002   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.737212   74485 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:30.737240   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:08:30.740016   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740246   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740447   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740470   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740646   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740650   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:30.740739   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:30.740949   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:08:30.740956   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741415   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:08:30.741427   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741545   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:08:30.741608   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.741699   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:08:30.821128   74485 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:30.848919   74485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:30.997885   74485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:31.004578   74485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:31.004656   74485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:31.023770   74485 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:31.023801   74485 start.go:495] detecting cgroup driver to use...
	I0818 20:08:31.023873   74485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:31.040507   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:31.054848   74485 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:31.054901   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:31.069584   74485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:31.089532   74485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:31.214560   74485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:31.394507   74485 docker.go:233] disabling docker service ...
	I0818 20:08:31.394571   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:31.411295   74485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:31.427312   74485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:31.547148   74485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:31.669942   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
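
The cri-docker and docker units above are taken out of the way with the same four-step sequence: stop the socket, stop the service, disable the socket, mask the service, then verify with systemctl is-active. A hedged sketch of that sequence with os/exec (the helper is illustrative; the systemctl arguments are the ones in the log):

package main

import (
	"fmt"
	"os/exec"
)

// disableUnit runs the same stop/disable/mask dance the log applies to the
// docker and cri-docker units so they cannot race with CRI-O.
func disableUnit(socket, service string) {
	steps := [][]string{
		{"systemctl", "stop", "-f", socket},
		{"systemctl", "stop", "-f", service},
		{"systemctl", "disable", socket},
		{"systemctl", "mask", service},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v\n%s", s, err, out)
		}
	}
}

func main() {
	disableUnit("cri-docker.socket", "cri-docker.service")
	disableUnit("docker.socket", "docker.service")
}
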
	I0818 20:08:31.686214   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:31.711412   74485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:31.711474   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.723281   74485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:31.723346   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.735488   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.748029   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.762456   74485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:31.779045   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.793816   74485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:31.816892   74485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
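
The /etc/crictl.yaml write plus the sed edits above leave CRI-O pointed at the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and a default sysctl that opens unprivileged ports. A rough sketch of the drop-in those edits converge on, generated from Go (the section names follow CRI-O's documented config layout and are an assumption here, since the log only shows the sed commands):

package main

import "fmt"

// crioDropIn reflects the settings the sed commands in the log converge on.
// It is illustrative only; the real 02-crio.conf carries other keys as well.
func crioDropIn(pauseImage, cgroupManager string) string {
	return fmt.Sprintf(`[crio.image]
pause_image = %q

[crio.runtime]
cgroup_manager = %q
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`, pauseImage, cgroupManager)
}

func main() {
	fmt.Print(crioDropIn("registry.k8s.io/pause:3.10", "cgroupfs"))
}
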
	I0818 20:08:31.829236   74485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:31.842943   74485 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:31.843000   74485 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:31.858422   74485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
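
The sysctl probe fails simply because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the runner loads the module and then turns on IPv4 forwarding. A small sketch of that check-then-modprobe fallback (commands copied from the log, error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	// If the bridge sysctl is not readable, br_netfilter is probably not
	// loaded yet, so load it before continuing (as the log does).
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Enable IPv4 forwarding, as the log does with `echo 1 > .../ip_forward`.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
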
	I0818 20:08:31.870179   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:32.003783   74485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:32.160300   74485 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:32.160368   74485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:32.165424   74485 start.go:563] Will wait 60s for crictl version
	I0818 20:08:32.165472   74485 ssh_runner.go:195] Run: which crictl
	I0818 20:08:32.169268   74485 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:32.211667   74485 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
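
The crictl version output above is a plain "Key:  value" listing that minikube waits up to 60s for. A short sketch of parsing those fields, should one want the runtime name and version programmatically (the parser is illustrative, not minikube's own):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion turns "Key:  value" lines into a map of trimmed fields.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.29.1
}
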
	I0818 20:08:32.211758   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.242366   74485 ssh_runner.go:195] Run: crio --version
	I0818 20:08:32.272343   74485 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:27.739698   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:30.239242   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.240089   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:32.273652   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetIP
	I0818 20:08:32.277017   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277362   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:08:32.277395   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:08:32.277654   74485 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:32.282225   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:32.306870   74485 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:32.306980   74485 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:32.307040   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:32.350393   74485 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:32.350473   74485 ssh_runner.go:195] Run: which lz4
	I0818 20:08:32.355129   74485 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0818 20:08:32.359816   74485 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0818 20:08:32.359839   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
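
Because the stat probe for /preloaded.tar.lz4 exits with status 1, the 389 MB preload tarball is copied to the guest and, a few lines further down, unpacked into /var with tar -I lz4. A hedged sketch of that exists-or-copy decision around a generic command runner (the runner signature is an assumption, not minikube's ssh_runner API):

package main

import (
	"fmt"
	"os/exec"
)

// fileExistsOnGuest mirrors the `stat -c "%s %y" <path>` probe from the log:
// a non-zero exit is treated as "not present" rather than a hard error.
func fileExistsOnGuest(runOnGuest func(args ...string) error, path string) bool {
	return runOnGuest("stat", "-c", "%s %y", path) == nil
}

func main() {
	local := "/home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"
	remote := "/preloaded.tar.lz4"

	// Stand-in runner: in minikube this goes over SSH; here it runs locally.
	run := func(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }

	if !fileExistsOnGuest(run, remote) {
		fmt.Printf("copying %s -> %s\n", local, remote)
		// scp would happen here; extraction then runs:
		//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	}
}
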
	I0818 20:08:30.757329   73711 main.go:141] libmachine: (no-preload-944426) Calling .Start
	I0818 20:08:30.757514   73711 main.go:141] libmachine: (no-preload-944426) Ensuring networks are active...
	I0818 20:08:30.758286   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network default is active
	I0818 20:08:30.758667   73711 main.go:141] libmachine: (no-preload-944426) Ensuring network mk-no-preload-944426 is active
	I0818 20:08:30.759084   73711 main.go:141] libmachine: (no-preload-944426) Getting domain xml...
	I0818 20:08:30.759889   73711 main.go:141] libmachine: (no-preload-944426) Creating domain...
	I0818 20:08:32.064235   73711 main.go:141] libmachine: (no-preload-944426) Waiting to get IP...
	I0818 20:08:32.065149   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.065617   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.065693   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.065614   75550 retry.go:31] will retry after 223.046315ms: waiting for machine to come up
	I0818 20:08:32.290000   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.290486   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.290517   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.290460   75550 retry.go:31] will retry after 359.595476ms: waiting for machine to come up
	I0818 20:08:32.652293   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:32.652922   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:32.652953   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:32.652891   75550 retry.go:31] will retry after 355.131428ms: waiting for machine to come up
	I0818 20:08:33.009174   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.009664   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.009692   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.009620   75550 retry.go:31] will retry after 433.765107ms: waiting for machine to come up
	I0818 20:08:33.445297   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.446028   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.446057   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.446005   75550 retry.go:31] will retry after 547.853366ms: waiting for machine to come up
	I0818 20:08:33.995808   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:33.996537   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:33.996569   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:33.996500   75550 retry.go:31] will retry after 830.882652ms: waiting for machine to come up
	I0818 20:08:34.828636   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:34.829139   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:34.829169   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:34.829088   75550 retry.go:31] will retry after 1.034176215s: waiting for machine to come up
	I0818 20:08:31.273969   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:31.774956   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.274942   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:32.773880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.274395   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:33.774217   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.273903   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.774024   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.274197   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:35.774641   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:34.240826   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:36.740440   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:33.831827   74485 crio.go:462] duration metric: took 1.476738272s to copy over tarball
	I0818 20:08:33.831892   74485 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0818 20:08:36.080107   74485 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.24818669s)
	I0818 20:08:36.080141   74485 crio.go:469] duration metric: took 2.248285769s to extract the tarball
	I0818 20:08:36.080159   74485 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0818 20:08:36.120912   74485 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:36.170431   74485 crio.go:514] all images are preloaded for cri-o runtime.
	I0818 20:08:36.170455   74485 cache_images.go:84] Images are preloaded, skipping loading
	I0818 20:08:36.170463   74485 kubeadm.go:934] updating node { 192.168.72.111 8444 v1.31.0 crio true true} ...
	I0818 20:08:36.170563   74485 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-852598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.111
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:08:36.170628   74485 ssh_runner.go:195] Run: crio config
	I0818 20:08:36.215464   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:36.215491   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:36.215504   74485 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:08:36.215528   74485 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.111 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-852598 NodeName:default-k8s-diff-port-852598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.111"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.111 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:08:36.215652   74485 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.111
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-852598"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.111
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.111"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:08:36.215718   74485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:08:36.227163   74485 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:08:36.227254   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:08:36.237577   74485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0818 20:08:36.254898   74485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:08:36.273530   74485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0818 20:08:36.290824   74485 ssh_runner.go:195] Run: grep 192.168.72.111	control-plane.minikube.internal$ /etc/hosts
	I0818 20:08:36.294542   74485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.111	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
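
The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: any existing line for that host is filtered out, the fresh entry is appended, and the result is copied back over /etc/hosts (the same pattern was used earlier for host.minikube.internal). The same idea as a standalone sketch in Go that prints the rewritten file rather than installing it:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites /etc/hosts-style content so it contains exactly one
// "<ip>\t<name>" line for name, mirroring the grep -v / echo / cp pipeline.
func upsertHost(contents, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(upsertHost(string(data), "192.168.72.111", "control-plane.minikube.internal"))
}
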
	I0818 20:08:36.306822   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:36.443673   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:08:36.461205   74485 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598 for IP: 192.168.72.111
	I0818 20:08:36.461232   74485 certs.go:194] generating shared ca certs ...
	I0818 20:08:36.461252   74485 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:08:36.461420   74485 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:08:36.461492   74485 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:08:36.461505   74485 certs.go:256] generating profile certs ...
	I0818 20:08:36.461621   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/client.key
	I0818 20:08:36.461717   74485 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key.44a0f5ad
	I0818 20:08:36.461783   74485 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key
	I0818 20:08:36.461930   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:08:36.461983   74485 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:08:36.461998   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:08:36.462026   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:08:36.462077   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:08:36.462112   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:08:36.462167   74485 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:36.462916   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:08:36.512610   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:08:36.558616   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:08:36.595755   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:08:36.638264   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0818 20:08:36.669336   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0818 20:08:36.692480   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:08:36.717235   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/default-k8s-diff-port-852598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 20:08:36.742220   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:08:36.765505   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:08:36.789279   74485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:08:36.813777   74485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:08:36.831256   74485 ssh_runner.go:195] Run: openssl version
	I0818 20:08:36.837184   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:08:36.848123   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853030   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.853089   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:08:36.859016   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:08:36.871084   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:08:36.882581   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.888943   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.889008   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:08:36.896841   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
	I0818 20:08:36.911762   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:08:36.923029   74485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.927982   74485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.928039   74485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:08:36.934165   74485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
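
Each CA bundle (minikubeCA.pem, 14934.pem, 149342.pem) is placed in /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients find trusted certificates. A sketch of that hash-and-link step, shelling out to openssl exactly as the log does (the helper itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes `openssl x509 -hash -noout -in <pem>` and creates
// /etc/ssl/certs/<hash>.0 pointing at the certificate, like the log's ln -fs.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace an existing link, as ln -fs would
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
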
	I0818 20:08:36.946794   74485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:08:36.951686   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:08:36.957905   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:08:36.964071   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:08:36.970369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:08:36.976369   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:08:36.982386   74485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
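
The six openssl x509 -checkend 86400 calls confirm that each control-plane certificate stays valid for at least another 24 hours before the restart reuses it. An equivalent check in pure Go with crypto/x509 (the 86400-second window comes from the commands above; the rest is a generic sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least the given duration, mirroring `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	fmt.Println(ok, err)
}
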
	I0818 20:08:36.988286   74485 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-852598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-852598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:08:36.988382   74485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:08:36.988433   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.036383   74485 cri.go:89] found id: ""
	I0818 20:08:37.036472   74485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:08:37.047135   74485 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:08:37.047159   74485 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:08:37.047204   74485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:08:37.058133   74485 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:08:37.059236   74485 kubeconfig.go:125] found "default-k8s-diff-port-852598" server: "https://192.168.72.111:8444"
	I0818 20:08:37.061368   74485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:08:37.072922   74485 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.111
	I0818 20:08:37.072961   74485 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:08:37.072975   74485 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:08:37.073035   74485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:08:37.120622   74485 cri.go:89] found id: ""
	I0818 20:08:37.120713   74485 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:08:37.138564   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:08:37.149091   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:08:37.149114   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:08:37.149167   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:08:37.160298   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:08:37.160364   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:08:37.170717   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:08:37.180261   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:08:37.180337   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:08:37.190466   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.200331   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:08:37.200407   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:08:37.210729   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:08:37.220302   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:08:37.220379   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:08:37.230616   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:08:37.241303   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:37.365964   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:35.865644   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:35.866148   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:35.866176   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:35.866094   75550 retry.go:31] will retry after 1.30047863s: waiting for machine to come up
	I0818 20:08:37.168446   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:37.168947   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:37.168985   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:37.168886   75550 retry.go:31] will retry after 1.143148547s: waiting for machine to come up
	I0818 20:08:38.314142   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:38.314622   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:38.314645   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:38.314568   75550 retry.go:31] will retry after 2.106630797s: waiting for machine to come up
	I0818 20:08:36.274010   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:36.774120   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.274983   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:37.774103   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.274370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:38.774660   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.274054   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.774215   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.274334   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:40.774765   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.240817   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:41.741780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:38.322305   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.523945   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:38.627637   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
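
Because existing configuration was found, restartPrimaryControlPlane rebuilds the control plane phase by phase rather than with a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane and etcd each run as a separate kubeadm init phase against the same /var/tmp/minikube/kubeadm.yaml. A compact sketch of that sequence (binary path and PATH prefix copied from the log; the loop itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
}
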
	I0818 20:08:38.794218   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:08:38.794298   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.295075   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.795095   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:39.810749   74485 api_server.go:72] duration metric: took 1.016560665s to wait for apiserver process to appear ...
	I0818 20:08:39.810778   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:08:39.810802   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:39.811324   74485 api_server.go:269] stopped: https://192.168.72.111:8444/healthz: Get "https://192.168.72.111:8444/healthz": dial tcp 192.168.72.111:8444: connect: connection refused
	I0818 20:08:40.311081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.309160   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:08:42.309190   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:08:42.309206   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.364083   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.364123   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:42.364148   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.370890   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.370918   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
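
The 403 from the anonymous probe and the repeated 500s with failed poststarthooks are expected while the freshly restarted apiserver finishes its startup hooks; the wait loop simply keeps polling /healthz until it returns 200 or the wait times out. A hedged sketch of such a poll loop (the endpoint is the one in the log; the timeout value and the skipped TLS verification are stand-ins for minikube's real settings and client-certificate handling):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The sketch skips verification; minikube actually presents the
		// cluster's client certificates instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // assumed window for the sketch
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.111:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
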
	I0818 20:08:40.423364   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:40.423886   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:40.423909   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:40.423851   75550 retry.go:31] will retry after 2.350918177s: waiting for machine to come up
	I0818 20:08:42.776801   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:42.777407   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:42.777440   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:42.777361   75550 retry.go:31] will retry after 3.529824243s: waiting for machine to come up
	I0818 20:08:42.815322   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:42.823702   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:42.823738   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.311540   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.317503   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.317537   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:43.810955   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:43.816976   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:43.817005   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.311718   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.316009   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.316038   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:44.811634   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:44.816069   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:44.816095   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.311732   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.317099   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:08:45.317122   74485 api_server.go:103] status: https://192.168.72.111:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:08:45.811063   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:08:45.815319   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:08:45.821699   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:08:45.821728   74485 api_server.go:131] duration metric: took 6.010942001s to wait for apiserver health ...
	I0818 20:08:45.821739   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:08:45.821774   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:08:45.823968   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:08:41.274803   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:41.774855   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.274721   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:42.774456   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.274042   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:43.774048   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.274465   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.774252   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.274602   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:45.774370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:44.239827   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:46.240539   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:45.825235   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:08:45.836398   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:08:45.854746   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:08:45.866305   74485 system_pods.go:59] 8 kube-system pods found
	I0818 20:08:45.866335   74485 system_pods.go:61] "coredns-6f6b679f8f-zfdn9" [8ed412a0-912d-4619-a2d8-2378f921037b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:08:45.866344   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [efa18356-f8dd-4fe4-acc6-59f859e7becf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:08:45.866351   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [b92f2056-c5b6-4a2f-8519-a83b2350866f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:08:45.866359   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [7eb6a474-891d-442e-bd85-4ca766312f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:08:45.866365   74485 system_pods.go:61] "kube-proxy-h8bpj" [472e231d-df71-44d6-8873-23d7e43d43d2] Running
	I0818 20:08:45.866375   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [43dccb14-0125-4d48-9537-8a87c865b586] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:08:45.866381   74485 system_pods.go:61] "metrics-server-6867b74b74-brqj6" [de1c0894-2b42-4728-bf63-bea36c5aa0d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:08:45.866387   74485 system_pods.go:61] "storage-provisioner" [41499d9e-d3cf-4dbc-9464-998a1f2c6186] Running
	I0818 20:08:45.866395   74485 system_pods.go:74] duration metric: took 11.62616ms to wait for pod list to return data ...
	I0818 20:08:45.866411   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:08:45.870540   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:08:45.870564   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:08:45.870578   74485 node_conditions.go:105] duration metric: took 4.15805ms to run NodePressure ...
	I0818 20:08:45.870597   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:08:46.138555   74485 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142738   74485 kubeadm.go:739] kubelet initialised
	I0818 20:08:46.142758   74485 kubeadm.go:740] duration metric: took 4.173219ms waiting for restarted kubelet to initialise ...
	I0818 20:08:46.142765   74485 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:08:46.147199   74485 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.151726   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151751   74485 pod_ready.go:82] duration metric: took 4.528706ms for pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.151762   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "coredns-6f6b679f8f-zfdn9" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.151770   74485 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.155962   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.155984   74485 pod_ready.go:82] duration metric: took 4.203038ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.155996   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.156002   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.159739   74485 pod_ready.go:98] node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159759   74485 pod_ready.go:82] duration metric: took 3.749616ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	E0818 20:08:46.159769   74485 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-852598" hosting pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-852598" has status "Ready":"False"
	I0818 20:08:46.159777   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:46.309056   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:46.309441   73711 main.go:141] libmachine: (no-preload-944426) DBG | unable to find current IP address of domain no-preload-944426 in network mk-no-preload-944426
	I0818 20:08:46.309470   73711 main.go:141] libmachine: (no-preload-944426) DBG | I0818 20:08:46.309395   75550 retry.go:31] will retry after 3.741295193s: waiting for machine to come up
	I0818 20:08:50.052617   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053049   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has current primary IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.053070   73711 main.go:141] libmachine: (no-preload-944426) Found IP for machine: 192.168.61.228
	I0818 20:08:50.053083   73711 main.go:141] libmachine: (no-preload-944426) Reserving static IP address...
	I0818 20:08:50.053446   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.053467   73711 main.go:141] libmachine: (no-preload-944426) Reserved static IP address: 192.168.61.228
	I0818 20:08:50.053484   73711 main.go:141] libmachine: (no-preload-944426) DBG | skip adding static IP to network mk-no-preload-944426 - found existing host DHCP lease matching {name: "no-preload-944426", mac: "52:54:00:51:87:4a", ip: "192.168.61.228"}
	I0818 20:08:50.053498   73711 main.go:141] libmachine: (no-preload-944426) DBG | Getting to WaitForSSH function...
	I0818 20:08:50.053510   73711 main.go:141] libmachine: (no-preload-944426) Waiting for SSH to be available...
	I0818 20:08:50.055459   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055790   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.055822   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.055911   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH client type: external
	I0818 20:08:50.055939   73711 main.go:141] libmachine: (no-preload-944426) DBG | Using SSH private key: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa (-rw-------)
	I0818 20:08:50.055971   73711 main.go:141] libmachine: (no-preload-944426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0818 20:08:50.055986   73711 main.go:141] libmachine: (no-preload-944426) DBG | About to run SSH command:
	I0818 20:08:50.055998   73711 main.go:141] libmachine: (no-preload-944426) DBG | exit 0
	I0818 20:08:50.175717   73711 main.go:141] libmachine: (no-preload-944426) DBG | SSH cmd err, output: <nil>: 
	I0818 20:08:50.176077   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetConfigRaw
	I0818 20:08:50.176705   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.179072   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179455   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.179486   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.179712   73711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/config.json ...
	I0818 20:08:50.179900   73711 machine.go:93] provisionDockerMachine start ...
	I0818 20:08:50.179923   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:50.180128   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.182300   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182679   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.182707   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.182822   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.183009   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183138   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.183292   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.183455   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.183613   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.183623   73711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 20:08:46.274398   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:46.774295   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.274412   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:47.774752   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.274754   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.774243   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.274501   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:49.773923   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.274017   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:50.774729   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:48.739015   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.741282   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:48.165270   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.166500   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:52.667585   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:50.284037   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0818 20:08:50.284069   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284354   73711 buildroot.go:166] provisioning hostname "no-preload-944426"
	I0818 20:08:50.284383   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.284503   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.287412   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287774   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.287814   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.287965   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.288164   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288352   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.288509   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.288669   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.288869   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.288889   73711 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-944426 && echo "no-preload-944426" | sudo tee /etc/hostname
	I0818 20:08:50.407844   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-944426
	
	I0818 20:08:50.407877   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.410740   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411115   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.411156   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.411402   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.411612   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411760   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.411869   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.412073   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.412277   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.412299   73711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-944426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-944426/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-944426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 20:08:50.521359   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 20:08:50.521388   73711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-7747/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-7747/.minikube}
	I0818 20:08:50.521456   73711 buildroot.go:174] setting up certificates
	I0818 20:08:50.521467   73711 provision.go:84] configureAuth start
	I0818 20:08:50.521481   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetMachineName
	I0818 20:08:50.521824   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:50.524572   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.524975   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.525002   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.525211   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.527350   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527669   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.527697   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.527790   73711 provision.go:143] copyHostCerts
	I0818 20:08:50.527856   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem, removing ...
	I0818 20:08:50.527872   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem
	I0818 20:08:50.527924   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/ca.pem (1082 bytes)
	I0818 20:08:50.528038   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem, removing ...
	I0818 20:08:50.528047   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem
	I0818 20:08:50.528065   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/cert.pem (1123 bytes)
	I0818 20:08:50.528119   73711 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem, removing ...
	I0818 20:08:50.528126   73711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem
	I0818 20:08:50.528143   73711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-7747/.minikube/key.pem (1675 bytes)
	I0818 20:08:50.528192   73711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem org=jenkins.no-preload-944426 san=[127.0.0.1 192.168.61.228 localhost minikube no-preload-944426]
	I0818 20:08:50.740892   73711 provision.go:177] copyRemoteCerts
	I0818 20:08:50.740964   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 20:08:50.740991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.743676   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744029   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.744059   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.744260   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.744494   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.744681   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.744848   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:50.826364   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 20:08:50.858459   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 20:08:50.890910   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 20:08:50.918703   73711 provision.go:87] duration metric: took 397.222917ms to configureAuth
	I0818 20:08:50.918730   73711 buildroot.go:189] setting minikube options for container-runtime
	I0818 20:08:50.918947   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:08:50.919029   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:50.922219   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922549   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:50.922573   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:50.922762   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:50.922991   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923166   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:50.923300   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:50.923475   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:50.923683   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:50.923700   73711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0818 20:08:51.193561   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0818 20:08:51.193588   73711 machine.go:96] duration metric: took 1.013672792s to provisionDockerMachine
	I0818 20:08:51.193603   73711 start.go:293] postStartSetup for "no-preload-944426" (driver="kvm2")
	I0818 20:08:51.193616   73711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 20:08:51.193660   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.194032   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 20:08:51.194060   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.196422   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196712   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.196747   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.196900   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.197046   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.197157   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.197325   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.279007   73711 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 20:08:51.283324   73711 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 20:08:51.283344   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/addons for local assets ...
	I0818 20:08:51.283424   73711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-7747/.minikube/files for local assets ...
	I0818 20:08:51.283524   73711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem -> 149342.pem in /etc/ssl/certs
	I0818 20:08:51.283641   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 20:08:51.293489   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:08:51.317415   73711 start.go:296] duration metric: took 123.797891ms for postStartSetup
	I0818 20:08:51.317455   73711 fix.go:56] duration metric: took 20.58515233s for fixHost
	I0818 20:08:51.317479   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.320161   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320452   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.320481   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.320667   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.320853   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321027   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.321171   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.321322   73711 main.go:141] libmachine: Using SSH client type: native
	I0818 20:08:51.321505   73711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.228 22 <nil> <nil>}
	I0818 20:08:51.321517   73711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 20:08:51.420193   73711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724011731.395088538
	
	I0818 20:08:51.420216   73711 fix.go:216] guest clock: 1724011731.395088538
	I0818 20:08:51.420223   73711 fix.go:229] Guest: 2024-08-18 20:08:51.395088538 +0000 UTC Remote: 2024-08-18 20:08:51.317459873 +0000 UTC m=+356.082724848 (delta=77.628665ms)
	I0818 20:08:51.420240   73711 fix.go:200] guest clock delta is within tolerance: 77.628665ms
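
The two fix.go lines above compare the guest VM's clock (read back over SSH with `date +%s.%N`) against the host clock and accept the 77ms skew. A minimal Go sketch of that kind of check follows; the tolerance constant and function names are assumptions made for illustration, not minikube's actual API.

// Minimal sketch of a guest-vs-host clock skew check like the one logged by
// fix.go above. The tolerance value and names are illustrative assumptions.
package main

import (
	"fmt"
	"time"
)

const clockTolerance = 2 * time.Second // assumed tolerance, for illustration only

// withinTolerance reports whether the guest clock read back over SSH is close
// enough to the host clock that no resync of the guest time is needed.
func withinTolerance(host, guest time.Time) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= clockTolerance
}

func main() {
	host := time.Now()
	guest := host.Add(77 * time.Millisecond) // delta similar to the one in the log
	delta, ok := withinTolerance(host, guest)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
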
	I0818 20:08:51.420256   73711 start.go:83] releasing machines lock for "no-preload-944426", held for 20.687989837s
	I0818 20:08:51.420273   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.420534   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:51.423567   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.423861   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.423888   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.424052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424528   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424690   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:08:51.424777   73711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 20:08:51.424825   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.424916   73711 ssh_runner.go:195] Run: cat /version.json
	I0818 20:08:51.424945   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:08:51.427482   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427714   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427786   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.427813   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.427962   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428080   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:51.428109   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:51.428146   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428283   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:08:51.428342   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428441   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:08:51.428532   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.428600   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:08:51.428707   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:08:51.528038   73711 ssh_runner.go:195] Run: systemctl --version
	I0818 20:08:51.534231   73711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0818 20:08:51.683823   73711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 20:08:51.690823   73711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 20:08:51.690901   73711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 20:08:51.707356   73711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0818 20:08:51.707389   73711 start.go:495] detecting cgroup driver to use...
	I0818 20:08:51.707459   73711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 20:08:51.723884   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 20:08:51.737661   73711 docker.go:217] disabling cri-docker service (if available) ...
	I0818 20:08:51.737715   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 20:08:51.751187   73711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 20:08:51.764367   73711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 20:08:51.881664   73711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 20:08:52.022183   73711 docker.go:233] disabling docker service ...
	I0818 20:08:52.022250   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 20:08:52.037108   73711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 20:08:52.050404   73711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 20:08:52.190167   73711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 20:08:52.325569   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 20:08:52.339546   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 20:08:52.358427   73711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0818 20:08:52.358487   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.369570   73711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0818 20:08:52.369629   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.382786   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.396845   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.407797   73711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 20:08:52.418649   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.428822   73711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0818 20:08:52.445799   73711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
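
The block above reconfigures cri-o (pause image, cgroup driver, conmon cgroup, default sysctls) by sed-editing /etc/crio/crio.conf.d/02-crio.conf in place rather than writing a fresh config file. A small sketch of how such a sed command can be assembled is below; the helper name is an assumption, and only the command string is built, not executed.

// Sketch of the drop-in edits above: each cri-o setting is changed by sed-ing
// the existing 02-crio.conf. Key and value strings are taken from the log.
package main

import "fmt"

func sedSet(key, value string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, key, key, value)
}

func main() {
	fmt.Println(sedSet("pause_image", "registry.k8s.io/pause:3.10"))
	fmt.Println(sedSet("cgroup_manager", "cgroupfs"))
}
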
	I0818 20:08:52.455730   73711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 20:08:52.464898   73711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0818 20:08:52.464951   73711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0818 20:08:52.477249   73711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
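
The sysctl probe above fails with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the log falls back to modprobe and then enables IP forwarding. A self-contained sketch of that probe-and-fallback sequence follows; in the test these commands run over SSH, so driving them locally through os/exec is an assumption made only to keep the example runnable.

// Sketch of the netfilter probe-and-fallback seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	// The probe fails with "No such file or directory" when br_netfilter
	// has not been loaded yet, which is exactly what the log shows.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl already visible, nothing to do
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
		return fmt.Errorf("loading br_netfilter failed")
	}
	// IP forwarding is enabled regardless, matching the next command in the log.
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
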
	I0818 20:08:52.487204   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:08:52.608922   73711 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0818 20:08:52.753849   73711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0818 20:08:52.753918   73711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0818 20:08:52.759116   73711 start.go:563] Will wait 60s for crictl version
	I0818 20:08:52.759175   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:52.763674   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 20:08:52.806016   73711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0818 20:08:52.806106   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.833670   73711 ssh_runner.go:195] Run: crio --version
	I0818 20:08:52.864310   73711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0818 20:08:52.865447   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetIP
	I0818 20:08:52.868265   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868667   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:08:52.868699   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:08:52.868900   73711 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0818 20:08:52.873656   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 20:08:52.887328   73711 kubeadm.go:883] updating cluster {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 20:08:52.887505   73711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 20:08:52.887553   73711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 20:08:52.923999   73711 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0818 20:08:52.924025   73711 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0818 20:08:52.924090   73711 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.924097   73711 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.924113   73711 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.924147   73711 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.924216   73711 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.924239   73711 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:52.924305   73711 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.924390   73711 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:52.925959   73711 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:52.925984   73711 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:52.926002   73711 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:52.925994   73711 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0818 20:08:52.926011   73711 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:52.926053   73711 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:52.926291   73711 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.117679   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157566   73711 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0818 20:08:53.157608   73711 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.157655   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.158464   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.161938   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217317   73711 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0818 20:08:53.217374   73711 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.217419   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.217427   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.229954   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0818 20:08:53.253154   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0818 20:08:53.253209   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.261450   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.269598   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.270354   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.270401   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.421994   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0818 20:08:53.422048   73711 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0818 20:08:53.422139   73711 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.422182   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.422195   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.422052   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.446061   73711 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0818 20:08:53.446101   73711 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.446100   73711 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0818 20:08:53.446114   73711 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0818 20:08:53.446158   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446201   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.446161   73711 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.446130   73711 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.446250   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.446280   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:53.474921   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0818 20:08:53.474936   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0818 20:08:53.474953   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474995   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0818 20:08:53.474999   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:53.505782   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:53.505904   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:53.505934   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:53.799739   73711 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:51.273895   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:51.773932   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.274544   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:52.774320   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.274698   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.774816   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.274579   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:54.774406   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.274940   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:55.774219   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:53.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.740857   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:55.167350   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.168652   74485 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:57.666744   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.666779   74485 pod_ready.go:82] duration metric: took 11.506987195s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.666802   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671280   74485 pod_ready.go:93] pod "kube-proxy-h8bpj" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.671302   74485 pod_ready.go:82] duration metric: took 4.49242ms for pod "kube-proxy-h8bpj" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.671311   74485 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675745   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:08:57.675765   74485 pod_ready.go:82] duration metric: took 4.446707ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:57.675779   74485 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	I0818 20:08:55.497054   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.022032642s)
	I0818 20:08:55.497090   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0818 20:08:55.497116   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0: (2.022155942s)
	I0818 20:08:55.497157   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1: (2.022131358s)
	I0818 20:08:55.497168   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0818 20:08:55.497227   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.497273   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.497313   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0: (1.991355489s)
	I0818 20:08:55.497274   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0: (1.991406662s)
	I0818 20:08:55.497362   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.497369   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.497393   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (1.991466215s)
	I0818 20:08:55.497409   73711 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.697646009s)
	I0818 20:08:55.497439   73711 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0818 20:08:55.497455   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0818 20:08:55.497468   73711 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.497504   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:08:55.590490   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0818 20:08:55.608567   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.608583   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0818 20:08:55.608658   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0818 20:08:55.608707   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0818 20:08:55.608728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0818 20:08:55.608741   73711 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.608756   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:55.608768   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0818 20:08:55.660747   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0818 20:08:55.660856   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:08:55.701347   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0818 20:08:55.701376   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:55.701433   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:08:55.717056   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0818 20:08:55.717159   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:08:59.680640   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.071854332s)
	I0818 20:08:59.680673   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0818 20:08:59.680700   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (4.071919945s)
	I0818 20:08:59.680728   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0818 20:08:59.680739   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680755   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (4.019877135s)
	I0818 20:08:59.680781   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0818 20:08:59.680792   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.97939667s)
	I0818 20:08:59.680802   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0818 20:08:59.680818   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0: (3.979373996s)
	I0818 20:08:59.680833   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0818 20:08:59.680847   73711 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:08:59.680876   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (3.96370085s)
	I0818 20:08:59.680895   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0818 20:08:56.274608   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:56.774444   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.274076   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:57.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.274722   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.773954   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.274617   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:59.774003   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.274400   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:00.774164   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:08:58.241463   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:00.241492   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:08:59.683057   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:02.183113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:01.753708   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (2.072881673s)
	I0818 20:09:01.753739   73711 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.072859667s)
	I0818 20:09:01.753786   73711 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0818 20:09:01.753747   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0818 20:09:01.753866   73711 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:01.753870   73711 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:01.753922   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0818 20:09:03.515107   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.761161853s)
	I0818 20:09:03.515136   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0818 20:09:03.515142   73711 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.761255334s)
	I0818 20:09:03.515162   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:03.515170   73711 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0818 20:09:03.515223   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0818 20:09:01.274971   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:01.774764   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.274293   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.774328   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.274089   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:03.774485   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.274355   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:04.774667   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.274525   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:05.774919   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:02.741235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.910002   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.239901   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:04.682962   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:07.183678   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:05.463531   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.948279133s)
	I0818 20:09:05.463559   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0818 20:09:05.463585   73711 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:05.463629   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0818 20:09:07.525332   73711 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.061676855s)
	I0818 20:09:07.525365   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0818 20:09:07.525401   73711 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:07.525473   73711 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0818 20:09:08.178855   73711 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19423-7747/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0818 20:09:08.178894   73711 cache_images.go:123] Successfully loaded all cached images
	I0818 20:09:08.178900   73711 cache_images.go:92] duration metric: took 15.254860831s to LoadCachedImages
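
The 15s of cache_images activity above follows one pattern per image: if "podman image inspect" cannot resolve the required tag, the stale reference is removed with crictl and the cached tarball from .minikube/cache is loaded with "podman load -i". A condensed sketch of that decision is below; the image list, cache directory, and file-name convention are taken from the log, but the helper itself is an illustration rather than minikube's code (which drives these commands over SSH).

// Condensed sketch of the LoadCachedImages flow visible above.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func loadIfMissing(image, cacheDir string) error {
	// "podman image inspect" exits non-zero when the image is absent,
	// which is how the log decides an image "needs transfer".
	if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
		return nil
	}
	_ = exec.Command("sudo", "crictl", "rmi", image).Run() // drop stale tags, ignore errors
	// naming convention seen in the log: registry.k8s.io/etcd:3.5.15-0 -> etcd_3.5.15-0
	name := strings.ReplaceAll(filepath.Base(image), ":", "_")
	tar := filepath.Join(cacheDir, name)
	fmt.Println("loading", tar)
	return exec.Command("sudo", "podman", "load", "-i", tar).Run()
}

func main() {
	images := []string{"registry.k8s.io/etcd:3.5.15-0", "registry.k8s.io/kube-proxy:v1.31.0"}
	for _, img := range images {
		if err := loadIfMissing(img, "/var/lib/minikube/images"); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}
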
	I0818 20:09:08.178915   73711 kubeadm.go:934] updating node { 192.168.61.228 8443 v1.31.0 crio true true} ...
	I0818 20:09:08.179070   73711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-944426 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 20:09:08.179163   73711 ssh_runner.go:195] Run: crio config
	I0818 20:09:08.229392   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:08.229418   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:08.229429   73711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 20:09:08.229453   73711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.228 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-944426 NodeName:no-preload-944426 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 20:09:08.229598   73711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-944426"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 20:09:08.229657   73711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 20:09:08.240023   73711 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 20:09:08.240121   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 20:09:08.249808   73711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 20:09:08.266663   73711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 20:09:08.284042   73711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0818 20:09:08.302210   73711 ssh_runner.go:195] Run: grep 192.168.61.228	control-plane.minikube.internal$ /etc/hosts
	I0818 20:09:08.306321   73711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
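
The /etc/hosts rewrite above (used earlier for host.minikube.internal and here for control-plane.minikube.internal) is idempotent: grep -v strips any existing line for the name, the fresh mapping is appended, and the temp file is copied back over /etc/hosts. The sketch below only builds that shell string; the helper name is assumed and execution (locally or over SSH) is left out.

// Sketch of the idempotent /etc/hosts update used twice in this log.
package main

import "fmt"

func hostsCmd(ip, name string) string {
	// grep -v drops any previous mapping; the brace group writes grep's output
	// plus the new tab-separated entry into a temp file that replaces /etc/hosts.
	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
}

func main() {
	fmt.Println(hostsCmd("192.168.61.228", "control-plane.minikube.internal"))
}
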
	I0818 20:09:08.318674   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:08.437701   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:08.462861   73711 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426 for IP: 192.168.61.228
	I0818 20:09:08.462889   73711 certs.go:194] generating shared ca certs ...
	I0818 20:09:08.462909   73711 certs.go:226] acquiring lock for ca certs: {Name:mka2b3e7c58af327d3df7eeeef3669cd7427d211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:08.463099   73711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key
	I0818 20:09:08.463166   73711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key
	I0818 20:09:08.463178   73711 certs.go:256] generating profile certs ...
	I0818 20:09:08.463297   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/client.key
	I0818 20:09:08.463400   73711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key.ec9e396f
	I0818 20:09:08.463459   73711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key
	I0818 20:09:08.463622   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem (1338 bytes)
	W0818 20:09:08.463663   73711 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934_empty.pem, impossibly tiny 0 bytes
	I0818 20:09:08.463676   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 20:09:08.463718   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/ca.pem (1082 bytes)
	I0818 20:09:08.463748   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/cert.pem (1123 bytes)
	I0818 20:09:08.463780   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/certs/key.pem (1675 bytes)
	I0818 20:09:08.463827   73711 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem (1708 bytes)
	I0818 20:09:08.464500   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 20:09:08.497860   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 20:09:08.550536   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 20:09:08.593972   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0818 20:09:08.625691   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 20:09:08.652285   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 20:09:08.676175   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 20:09:08.703870   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/no-preload-944426/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 20:09:08.729102   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/ssl/certs/149342.pem --> /usr/share/ca-certificates/149342.pem (1708 bytes)
	I0818 20:09:08.758017   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 20:09:08.783528   73711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-7747/.minikube/certs/14934.pem --> /usr/share/ca-certificates/14934.pem (1338 bytes)
	I0818 20:09:08.808211   73711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 20:09:08.825465   73711 ssh_runner.go:195] Run: openssl version
	I0818 20:09:08.831856   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/149342.pem && ln -fs /usr/share/ca-certificates/149342.pem /etc/ssl/certs/149342.pem"
	I0818 20:09:08.843336   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847774   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:51 /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.847824   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/149342.pem
	I0818 20:09:08.854110   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/149342.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 20:09:08.865279   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 20:09:08.876107   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880723   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.880786   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 20:09:08.886526   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 20:09:08.898139   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14934.pem && ln -fs /usr/share/ca-certificates/14934.pem /etc/ssl/certs/14934.pem"
	I0818 20:09:08.909258   73711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.913957   73711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:51 /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.914015   73711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14934.pem
	I0818 20:09:08.919888   73711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14934.pem /etc/ssl/certs/51391683.0"
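
The ln -fs targets above (3ec20f2e.0, b5213941.0, 51391683.0) come from OpenSSL's subject hash: each trusted certificate in /etc/ssl/certs is reachable through a <hash>.0 symlink so TLS libraries can look it up by hash. A small sketch of deriving that link name follows; the helper is an illustration of the idiom, not minikube's own code, and the certificate path is one taken from the log.

// Sketch of the CA-trust step above: OpenSSL's subject hash names the symlink.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hashLinkName(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	// e.g. "b5213941" for minikubeCA.pem, matching the b5213941.0 link in the log
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("hashing failed:", err)
		return
	}
	fmt.Println("would link /etc/ssl/certs/" + link)
}
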
	I0818 20:09:08.933118   73711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 20:09:08.937979   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 20:09:08.944427   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 20:09:08.950686   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 20:09:08.956949   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 20:09:08.963201   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 20:09:08.969284   73711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
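
Each "openssl x509 -noout ... -checkend 86400" run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, so no regeneration is needed before restarting the control plane. A minimal sketch of the same check, with the certificate path being an assumption taken from the first probe in the log:

// Wraps OpenSSL's -checkend probe: true if the cert is still valid in 24h.
package main

import (
	"fmt"
	"os/exec"
)

func validForADay(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	fmt.Println(validForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}
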
	I0818 20:09:08.975411   73711 kubeadm.go:392] StartCluster: {Name:no-preload-944426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-944426 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 20:09:08.975501   73711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0818 20:09:08.975543   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.019794   73711 cri.go:89] found id: ""
	I0818 20:09:09.019859   73711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 20:09:09.030614   73711 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 20:09:09.030635   73711 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 20:09:09.030689   73711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 20:09:09.041513   73711 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 20:09:09.042532   73711 kubeconfig.go:125] found "no-preload-944426" server: "https://192.168.61.228:8443"
	I0818 20:09:09.044606   73711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 20:09:09.054823   73711 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.228
	I0818 20:09:09.054855   73711 kubeadm.go:1160] stopping kube-system containers ...
	I0818 20:09:09.054867   73711 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0818 20:09:09.054919   73711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 20:09:09.096324   73711 cri.go:89] found id: ""
	I0818 20:09:09.096412   73711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 20:09:09.112752   73711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:09:09.122515   73711 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:09:09.122537   73711 kubeadm.go:157] found existing configuration files:
	
	I0818 20:09:09.122578   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:09:09.131551   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:09:09.131604   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:09:09.140888   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:09:09.149865   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:09:09.149920   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:09:09.159008   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.168220   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:09:09.168279   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:09:09.177638   73711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:09:09.187508   73711 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:09:09.187567   73711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:09:09.196657   73711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:09:09.206117   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:09.331465   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:06.274787   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:06.774812   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.273986   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:07.774377   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.273933   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:08.774231   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.274070   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.774396   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.274898   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:10.773952   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:09.242594   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.738983   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:09.682305   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:11.683106   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:10.574796   73711 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.243293266s)
	I0818 20:09:10.574822   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.778850   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.843088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:10.931752   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:09:10.931846   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.432245   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.932577   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.948423   73711 api_server.go:72] duration metric: took 1.016687944s to wait for apiserver process to appear ...
	I0818 20:09:11.948449   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:09:11.948477   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:11.948946   73711 api_server.go:269] stopped: https://192.168.61.228:8443/healthz: Get "https://192.168.61.228:8443/healthz": dial tcp 192.168.61.228:8443: connect: connection refused
	I0818 20:09:12.448725   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.739963   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.739993   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.740010   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.750388   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 20:09:14.750411   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 20:09:14.948679   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:14.956174   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:14.956205   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:11.274322   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:11.774640   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.274152   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:12.774629   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.274045   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:13.774185   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.273967   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:14.774303   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.274472   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.774844   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:15.449273   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.453840   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.453870   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:15.949138   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:15.958790   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 20:09:15.958813   73711 api_server.go:103] status: https://192.168.61.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 20:09:16.449521   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:09:16.453975   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:09:16.460298   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:09:16.460323   73711 api_server.go:131] duration metric: took 4.511867816s to wait for apiserver health ...
	I0818 20:09:16.460330   73711 cni.go:84] Creating CNI manager for ""
	I0818 20:09:16.460339   73711 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:09:16.462141   73711 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:09:13.740020   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.238126   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:13.683910   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.182408   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:16.463457   73711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:09:16.474867   73711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:09:16.494479   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:09:16.502870   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:09:16.502898   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 20:09:16.502906   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 20:09:16.502917   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 20:09:16.502926   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 20:09:16.502937   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:09:16.502951   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 20:09:16.502959   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:09:16.502964   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:09:16.502970   73711 system_pods.go:74] duration metric: took 8.468743ms to wait for pod list to return data ...
	I0818 20:09:16.502977   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:09:16.507863   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:09:16.507884   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:09:16.507893   73711 node_conditions.go:105] duration metric: took 4.912203ms to run NodePressure ...
	I0818 20:09:16.507907   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 20:09:16.779765   73711 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790746   73711 kubeadm.go:739] kubelet initialised
	I0818 20:09:16.790771   73711 kubeadm.go:740] duration metric: took 10.982299ms waiting for restarted kubelet to initialise ...
	I0818 20:09:16.790780   73711 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:16.799544   73711 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.806805   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806826   73711 pod_ready.go:82] duration metric: took 7.251632ms for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.806835   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.806841   73711 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.813614   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813646   73711 pod_ready.go:82] duration metric: took 6.794013ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.813656   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "etcd-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.813664   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.818982   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819016   73711 pod_ready.go:82] duration metric: took 5.338981ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.819028   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-apiserver-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.819037   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:16.898401   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898433   73711 pod_ready.go:82] duration metric: took 79.37927ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:16.898446   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:16.898454   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.297663   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297697   73711 pod_ready.go:82] duration metric: took 399.23365ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.297706   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-proxy-2l6g8" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.297712   73711 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:17.697884   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697909   73711 pod_ready.go:82] duration metric: took 400.191092ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:17.697919   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "kube-scheduler-no-preload-944426" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:17.697925   73711 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:18.099008   73711 pod_ready.go:98] node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099034   73711 pod_ready.go:82] duration metric: took 401.09908ms for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:09:18.099044   73711 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-944426" hosting pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:18.099050   73711 pod_ready.go:39] duration metric: took 1.30825923s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:18.099071   73711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:09:18.111862   73711 ops.go:34] apiserver oom_adj: -16
	I0818 20:09:18.111888   73711 kubeadm.go:597] duration metric: took 9.081245207s to restartPrimaryControlPlane
	I0818 20:09:18.111901   73711 kubeadm.go:394] duration metric: took 9.136525478s to StartCluster
	I0818 20:09:18.111931   73711 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.112017   73711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:09:18.114460   73711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:09:18.114771   73711 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.228 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:09:18.114885   73711 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:09:18.114987   73711 config.go:182] Loaded profile config "no-preload-944426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:09:18.115022   73711 addons.go:69] Setting storage-provisioner=true in profile "no-preload-944426"
	I0818 20:09:18.115036   73711 addons.go:69] Setting default-storageclass=true in profile "no-preload-944426"
	I0818 20:09:18.115059   73711 addons.go:234] Setting addon storage-provisioner=true in "no-preload-944426"
	I0818 20:09:18.115075   73711 addons.go:69] Setting metrics-server=true in profile "no-preload-944426"
	W0818 20:09:18.115082   73711 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:09:18.115095   73711 addons.go:234] Setting addon metrics-server=true in "no-preload-944426"
	I0818 20:09:18.115067   73711 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-944426"
	W0818 20:09:18.115104   73711 addons.go:243] addon metrics-server should already be in state true
	I0818 20:09:18.115122   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115132   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.115517   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115530   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115541   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.115553   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115560   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.115592   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.117511   73711 out.go:177] * Verifying Kubernetes components...
	I0818 20:09:18.118740   73711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:09:18.133596   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33883
	I0818 20:09:18.134093   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.134661   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.134685   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.135066   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.135263   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.136138   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0818 20:09:18.136520   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.136981   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.137004   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.137353   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.137911   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.137957   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.138952   73711 addons.go:234] Setting addon default-storageclass=true in "no-preload-944426"
	W0818 20:09:18.138975   73711 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:09:18.139001   73711 host.go:66] Checking if "no-preload-944426" exists ...
	I0818 20:09:18.139356   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.139413   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.155618   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0818 20:09:18.156076   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.156666   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.156687   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.157086   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.157669   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.157700   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.158080   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38975
	I0818 20:09:18.158422   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.158850   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.158868   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.158888   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0818 20:09:18.159237   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.159282   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.159455   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.159741   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.159763   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.160108   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.160582   73711 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:09:18.160606   73711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:09:18.165108   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.166977   73711 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:09:18.168139   73711 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.168156   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:09:18.168174   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.171426   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172004   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.172041   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.172082   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.172238   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.172336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.172423   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.175961   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43047
	I0818 20:09:18.176421   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.176543   73711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0818 20:09:18.176861   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.176875   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.177065   73711 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:09:18.177176   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.177345   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.177745   73711 main.go:141] libmachine: Using API Version  1
	I0818 20:09:18.177762   73711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:09:18.178162   73711 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:09:18.178336   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetState
	I0818 20:09:18.179445   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180052   73711 main.go:141] libmachine: (no-preload-944426) Calling .DriverName
	I0818 20:09:18.180238   73711 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.180253   73711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:09:18.180275   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.181198   73711 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:09:18.182420   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:09:18.182447   73711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:09:18.182464   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHHostname
	I0818 20:09:18.183457   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183499   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.183513   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.183656   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.183820   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.183953   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.184112   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.185260   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185575   73711 main.go:141] libmachine: (no-preload-944426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:87:4a", ip: ""} in network mk-no-preload-944426: {Iface:virbr3 ExpiryTime:2024-08-18 21:08:42 +0000 UTC Type:0 Mac:52:54:00:51:87:4a Iaid: IPaddr:192.168.61.228 Prefix:24 Hostname:no-preload-944426 Clientid:01:52:54:00:51:87:4a}
	I0818 20:09:18.185588   73711 main.go:141] libmachine: (no-preload-944426) DBG | domain no-preload-944426 has defined IP address 192.168.61.228 and MAC address 52:54:00:51:87:4a in network mk-no-preload-944426
	I0818 20:09:18.185754   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHPort
	I0818 20:09:18.185879   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHKeyPath
	I0818 20:09:18.186013   73711 main.go:141] libmachine: (no-preload-944426) Calling .GetSSHUsername
	I0818 20:09:18.186099   73711 sshutil.go:53] new ssh client: &{IP:192.168.61.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/no-preload-944426/id_rsa Username:docker}
	I0818 20:09:18.338778   73711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:09:18.356229   73711 node_ready.go:35] waiting up to 6m0s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:18.496927   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:09:18.496949   73711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:09:18.513205   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:09:18.540482   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:09:18.540505   73711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:09:18.544078   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:09:18.613315   73711 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:18.613340   73711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:09:18.668416   73711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:09:19.638171   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094064475s)
	I0818 20:09:19.638274   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638299   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638177   73711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.124933278s)
	I0818 20:09:19.638328   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638343   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638281   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638412   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638697   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638714   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638724   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638732   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638825   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638845   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638853   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638857   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.638932   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638946   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.638966   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.638994   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639006   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.638893   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.639016   73711 addons.go:475] Verifying addon metrics-server=true in "no-preload-944426"
	I0818 20:09:19.639024   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.639227   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.639401   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.639416   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.640889   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.640905   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.640973   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647148   73711 main.go:141] libmachine: Making call to close driver server
	I0818 20:09:19.647169   73711 main.go:141] libmachine: (no-preload-944426) Calling .Close
	I0818 20:09:19.647416   73711 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:09:19.647460   73711 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:09:19.647448   73711 main.go:141] libmachine: (no-preload-944426) DBG | Closing plugin on server side
	I0818 20:09:19.649397   73711 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0818 20:09:19.650643   73711 addons.go:510] duration metric: took 1.535758897s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0818 20:09:16.274654   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:16.774176   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.273912   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:17.774245   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.274880   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:18.774709   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.274083   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:19.774819   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.274546   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:20.774382   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:20.774456   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:20.815406   74389 cri.go:89] found id: ""
	I0818 20:09:20.815431   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.815447   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:20.815453   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:20.815504   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:20.849445   74389 cri.go:89] found id: ""
	I0818 20:09:20.849468   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.849475   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:20.849481   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:20.849528   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:20.886018   74389 cri.go:89] found id: ""
	I0818 20:09:20.886043   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.886051   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:20.886056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:20.886106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:20.921730   74389 cri.go:89] found id: ""
	I0818 20:09:20.921757   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.921768   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:20.921775   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:20.921836   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:18.240003   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.738804   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:18.184836   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.682274   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:20.360319   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:22.860498   73711 node_ready.go:53] node "no-preload-944426" has status "Ready":"False"
	I0818 20:09:20.958574   74389 cri.go:89] found id: ""
	I0818 20:09:20.958601   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.958611   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:20.958618   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:20.958677   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:20.992830   74389 cri.go:89] found id: ""
	I0818 20:09:20.992858   74389 logs.go:276] 0 containers: []
	W0818 20:09:20.992867   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:20.992875   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:20.992939   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:21.028535   74389 cri.go:89] found id: ""
	I0818 20:09:21.028570   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.028581   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:21.028588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:21.028650   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:21.066319   74389 cri.go:89] found id: ""
	I0818 20:09:21.066359   74389 logs.go:276] 0 containers: []
	W0818 20:09:21.066370   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:21.066381   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:21.066395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:21.119521   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:21.119552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:21.133861   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:21.133883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:21.262343   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:21.262369   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:21.262391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:21.338724   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:21.338760   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:23.881431   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:23.894816   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:23.894885   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:23.928898   74389 cri.go:89] found id: ""
	I0818 20:09:23.928920   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.928929   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:23.928935   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:23.928984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:23.963157   74389 cri.go:89] found id: ""
	I0818 20:09:23.963182   74389 logs.go:276] 0 containers: []
	W0818 20:09:23.963190   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:23.963196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:23.963246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:24.001095   74389 cri.go:89] found id: ""
	I0818 20:09:24.001134   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.001146   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:24.001153   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:24.001221   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:24.038357   74389 cri.go:89] found id: ""
	I0818 20:09:24.038389   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.038400   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:24.038407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:24.038466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:24.074168   74389 cri.go:89] found id: ""
	I0818 20:09:24.074201   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.074209   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:24.074220   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:24.074282   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:24.106534   74389 cri.go:89] found id: ""
	I0818 20:09:24.106570   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.106578   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:24.106584   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:24.106636   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:24.144882   74389 cri.go:89] found id: ""
	I0818 20:09:24.144911   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.144922   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:24.144932   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:24.144990   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:24.185475   74389 cri.go:89] found id: ""
	I0818 20:09:24.185503   74389 logs.go:276] 0 containers: []
	W0818 20:09:24.185511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:24.185518   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:24.185534   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:24.200730   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:24.200759   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:24.278143   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:24.278165   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:24.278182   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:24.356739   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:24.356774   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:24.410433   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:24.410464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:22.739478   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.238989   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.239357   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:23.181992   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.182417   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:27.183071   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:25.360413   73711 node_ready.go:49] node "no-preload-944426" has status "Ready":"True"
	I0818 20:09:25.360449   73711 node_ready.go:38] duration metric: took 7.004187421s for node "no-preload-944426" to be "Ready" ...
	I0818 20:09:25.360462   73711 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:09:25.366498   73711 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:27.373766   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.873098   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:26.962996   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:26.977544   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:26.977603   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:27.013433   74389 cri.go:89] found id: ""
	I0818 20:09:27.013462   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.013473   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:27.013480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:27.013544   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:27.049106   74389 cri.go:89] found id: ""
	I0818 20:09:27.049130   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.049139   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:27.049149   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:27.049197   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:27.083559   74389 cri.go:89] found id: ""
	I0818 20:09:27.083584   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.083595   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:27.083601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:27.083659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:27.120499   74389 cri.go:89] found id: ""
	I0818 20:09:27.120527   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.120537   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:27.120545   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:27.120605   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:27.155291   74389 cri.go:89] found id: ""
	I0818 20:09:27.155315   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.155323   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:27.155329   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:27.155375   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:27.197840   74389 cri.go:89] found id: ""
	I0818 20:09:27.197879   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.197899   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:27.197907   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:27.197969   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:27.232244   74389 cri.go:89] found id: ""
	I0818 20:09:27.232271   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.232280   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:27.232288   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:27.232349   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:27.267349   74389 cri.go:89] found id: ""
	I0818 20:09:27.267404   74389 logs.go:276] 0 containers: []
	W0818 20:09:27.267416   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:27.267427   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:27.267447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:27.311126   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:27.311154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:27.362799   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:27.362833   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:27.376663   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:27.376684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:27.456426   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:27.456449   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:27.456464   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.039534   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:30.052863   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:30.052935   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:30.095709   74389 cri.go:89] found id: ""
	I0818 20:09:30.095733   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.095741   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:30.095748   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:30.095805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:30.150394   74389 cri.go:89] found id: ""
	I0818 20:09:30.150417   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.150424   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:30.150429   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:30.150487   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:30.190275   74389 cri.go:89] found id: ""
	I0818 20:09:30.190300   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.190308   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:30.190317   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:30.190374   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:30.229748   74389 cri.go:89] found id: ""
	I0818 20:09:30.229779   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.229790   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:30.229797   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:30.229860   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:30.274024   74389 cri.go:89] found id: ""
	I0818 20:09:30.274068   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.274076   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:30.274081   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:30.274142   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:30.313775   74389 cri.go:89] found id: ""
	I0818 20:09:30.313799   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.313807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:30.313813   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:30.313868   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:30.353728   74389 cri.go:89] found id: ""
	I0818 20:09:30.353753   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.353761   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:30.353767   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:30.353821   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:30.391319   74389 cri.go:89] found id: ""
	I0818 20:09:30.391341   74389 logs.go:276] 0 containers: []
	W0818 20:09:30.391347   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:30.391356   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:30.391367   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:30.472354   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:30.472389   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:30.515318   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:30.515360   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:30.565596   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:30.565629   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:30.579550   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:30.579575   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:30.649278   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:29.738977   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.238945   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:29.683136   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.182825   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:31.873262   73711 pod_ready.go:103] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:32.372828   73711 pod_ready.go:93] pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.372849   73711 pod_ready.go:82] duration metric: took 7.006326702s for pod "coredns-6f6b679f8f-vqsgw" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.372858   73711 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376709   73711 pod_ready.go:93] pod "etcd-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.376732   73711 pod_ready.go:82] duration metric: took 3.867173ms for pod "etcd-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.376743   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380703   73711 pod_ready.go:93] pod "kube-apiserver-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.380722   73711 pod_ready.go:82] duration metric: took 3.970732ms for pod "kube-apiserver-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.380733   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385137   73711 pod_ready.go:93] pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.385159   73711 pod_ready.go:82] duration metric: took 4.417483ms for pod "kube-controller-manager-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.385171   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390646   73711 pod_ready.go:93] pod "kube-proxy-2l6g8" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.390702   73711 pod_ready.go:82] duration metric: took 5.522399ms for pod "kube-proxy-2l6g8" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.390713   73711 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772352   73711 pod_ready.go:93] pod "kube-scheduler-no-preload-944426" in "kube-system" namespace has status "Ready":"True"
	I0818 20:09:32.772374   73711 pod_ready.go:82] duration metric: took 381.654122ms for pod "kube-scheduler-no-preload-944426" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:32.772384   73711 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	I0818 20:09:34.779615   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:33.150069   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:33.164197   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:33.164261   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:33.204591   74389 cri.go:89] found id: ""
	I0818 20:09:33.204615   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.204627   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:33.204632   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:33.204693   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:33.242352   74389 cri.go:89] found id: ""
	I0818 20:09:33.242376   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.242387   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:33.242394   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:33.242458   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:33.280219   74389 cri.go:89] found id: ""
	I0818 20:09:33.280242   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.280251   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:33.280258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:33.280317   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:33.320879   74389 cri.go:89] found id: ""
	I0818 20:09:33.320919   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.320931   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:33.320939   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:33.321001   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:33.356049   74389 cri.go:89] found id: ""
	I0818 20:09:33.356074   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.356082   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:33.356088   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:33.356137   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:33.394116   74389 cri.go:89] found id: ""
	I0818 20:09:33.394144   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.394156   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:33.394164   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:33.394238   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:33.433686   74389 cri.go:89] found id: ""
	I0818 20:09:33.433712   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.433723   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:33.433728   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:33.433773   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:33.468502   74389 cri.go:89] found id: ""
	I0818 20:09:33.468529   74389 logs.go:276] 0 containers: []
	W0818 20:09:33.468541   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:33.468551   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:33.468570   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:33.556312   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:33.556349   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:33.595547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:33.595621   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:33.648719   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:33.648753   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:33.663770   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:33.663803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:33.746833   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:34.239095   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.738310   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:34.683291   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:37.181676   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.780369   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.278364   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:36.247309   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:36.261267   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:36.261338   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:36.297798   74389 cri.go:89] found id: ""
	I0818 20:09:36.297825   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.297835   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:36.297844   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:36.297901   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:36.332346   74389 cri.go:89] found id: ""
	I0818 20:09:36.332371   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.332381   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:36.332389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:36.332449   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:36.370463   74389 cri.go:89] found id: ""
	I0818 20:09:36.370488   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.370498   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:36.370505   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:36.370563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:36.409671   74389 cri.go:89] found id: ""
	I0818 20:09:36.409696   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.409705   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:36.409712   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:36.409770   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:36.448358   74389 cri.go:89] found id: ""
	I0818 20:09:36.448387   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.448398   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:36.448405   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:36.448466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:36.498430   74389 cri.go:89] found id: ""
	I0818 20:09:36.498457   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.498464   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:36.498471   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:36.498517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:36.564417   74389 cri.go:89] found id: ""
	I0818 20:09:36.564448   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.564456   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:36.564462   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:36.564517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:36.614736   74389 cri.go:89] found id: ""
	I0818 20:09:36.614760   74389 logs.go:276] 0 containers: []
	W0818 20:09:36.614778   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:36.614789   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:36.614803   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:36.668664   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:36.668691   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:36.682185   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:36.682211   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:36.754186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:36.754214   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:36.754255   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:36.842173   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:36.842206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:39.381749   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:39.395710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:39.395767   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:39.434359   74389 cri.go:89] found id: ""
	I0818 20:09:39.434381   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.434388   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:39.434394   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:39.434450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:39.473353   74389 cri.go:89] found id: ""
	I0818 20:09:39.473375   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.473384   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:39.473389   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:39.473438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:39.510536   74389 cri.go:89] found id: ""
	I0818 20:09:39.510563   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.510572   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:39.510578   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:39.510632   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:39.549287   74389 cri.go:89] found id: ""
	I0818 20:09:39.549315   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.549325   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:39.549333   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:39.549394   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:39.587014   74389 cri.go:89] found id: ""
	I0818 20:09:39.587056   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.587093   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:39.587100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:39.587150   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:39.624795   74389 cri.go:89] found id: ""
	I0818 20:09:39.624826   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.624837   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:39.624844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:39.624900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:39.658404   74389 cri.go:89] found id: ""
	I0818 20:09:39.658446   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.658457   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:39.658464   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:39.658516   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:39.695092   74389 cri.go:89] found id: ""
	I0818 20:09:39.695117   74389 logs.go:276] 0 containers: []
	W0818 20:09:39.695125   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:39.695134   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:39.695147   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:39.752753   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:39.752795   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:39.766817   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:39.766846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:39.844360   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:39.844389   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:39.844406   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:39.923938   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:39.923971   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:38.740139   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.238400   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:39.181867   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.182275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:41.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.781697   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:42.465852   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:42.481657   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:42.481730   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:42.525679   74389 cri.go:89] found id: ""
	I0818 20:09:42.525709   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.525716   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:42.525723   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:42.525789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:42.590279   74389 cri.go:89] found id: ""
	I0818 20:09:42.590307   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.590315   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:42.590323   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:42.590407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:42.624013   74389 cri.go:89] found id: ""
	I0818 20:09:42.624045   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.624054   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:42.624062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:42.624122   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:42.659500   74389 cri.go:89] found id: ""
	I0818 20:09:42.659524   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.659531   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:42.659537   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:42.659587   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:42.694899   74389 cri.go:89] found id: ""
	I0818 20:09:42.694921   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.694928   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:42.694933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:42.694983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:42.729768   74389 cri.go:89] found id: ""
	I0818 20:09:42.729797   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.729805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:42.729811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:42.729873   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:42.766922   74389 cri.go:89] found id: ""
	I0818 20:09:42.766949   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.766960   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:42.766967   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:42.767027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:42.801967   74389 cri.go:89] found id: ""
	I0818 20:09:42.801995   74389 logs.go:276] 0 containers: []
	W0818 20:09:42.802006   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:42.802016   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:42.802032   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:42.879205   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:42.879234   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:42.920591   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:42.920628   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:42.974326   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:42.974362   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:42.989067   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:42.989102   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:43.065929   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:45.566918   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:45.582223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:45.582298   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:45.616194   74389 cri.go:89] found id: ""
	I0818 20:09:45.616219   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.616227   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:45.616233   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:45.616287   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:45.649714   74389 cri.go:89] found id: ""
	I0818 20:09:45.649736   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.649743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:45.649748   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:45.649805   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:45.684553   74389 cri.go:89] found id: ""
	I0818 20:09:45.684572   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.684582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:45.684588   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:45.684648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:45.721715   74389 cri.go:89] found id: ""
	I0818 20:09:45.721742   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.721753   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:45.721760   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:45.721822   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:45.757903   74389 cri.go:89] found id: ""
	I0818 20:09:45.757933   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.757944   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:45.757952   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:45.758016   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:45.794649   74389 cri.go:89] found id: ""
	I0818 20:09:45.794683   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.794694   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:45.794702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:45.794765   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:45.835340   74389 cri.go:89] found id: ""
	I0818 20:09:45.835362   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.835370   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:45.835375   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:45.835447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:45.870307   74389 cri.go:89] found id: ""
	I0818 20:09:45.870335   74389 logs.go:276] 0 containers: []
	W0818 20:09:45.870344   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:45.870352   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:45.870365   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:45.926565   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:45.926695   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:43.239274   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.739280   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:43.182744   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.684210   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:46.278261   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.279139   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:45.940126   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:45.940156   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:46.009606   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:46.009627   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:46.009643   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:46.092327   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:46.092358   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:48.632286   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:48.646613   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:48.646675   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:48.681060   74389 cri.go:89] found id: ""
	I0818 20:09:48.681111   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.681122   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:48.681130   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:48.681194   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:48.714884   74389 cri.go:89] found id: ""
	I0818 20:09:48.714908   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.714916   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:48.714921   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:48.714971   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:48.752032   74389 cri.go:89] found id: ""
	I0818 20:09:48.752117   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.752132   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:48.752139   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:48.752201   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:48.793013   74389 cri.go:89] found id: ""
	I0818 20:09:48.793038   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.793049   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:48.793056   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:48.793114   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:48.827476   74389 cri.go:89] found id: ""
	I0818 20:09:48.827499   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.827509   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:48.827516   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:48.827576   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:48.862071   74389 cri.go:89] found id: ""
	I0818 20:09:48.862097   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.862108   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:48.862115   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:48.862180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:48.900541   74389 cri.go:89] found id: ""
	I0818 20:09:48.900568   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.900576   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:48.900581   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:48.900629   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:48.934678   74389 cri.go:89] found id: ""
	I0818 20:09:48.934704   74389 logs.go:276] 0 containers: []
	W0818 20:09:48.934712   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:48.934720   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:48.934732   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:49.023307   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:49.023350   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:49.061607   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:49.061633   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:49.113126   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:49.113157   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:49.128202   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:49.128242   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:49.204205   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:47.739502   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.239148   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:48.181581   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.181939   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.182295   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:50.779145   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:52.779195   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.779440   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:51.704335   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:51.717424   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:51.717515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:51.754325   74389 cri.go:89] found id: ""
	I0818 20:09:51.754350   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.754362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:51.754370   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:51.754428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:51.792496   74389 cri.go:89] found id: ""
	I0818 20:09:51.792518   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.792529   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:51.792536   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:51.792594   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:51.830307   74389 cri.go:89] found id: ""
	I0818 20:09:51.830332   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.830340   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:51.830346   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:51.830398   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:51.868298   74389 cri.go:89] found id: ""
	I0818 20:09:51.868330   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.868343   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:51.868351   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:51.868419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:51.906077   74389 cri.go:89] found id: ""
	I0818 20:09:51.906108   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.906120   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:51.906126   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:51.906179   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:51.939922   74389 cri.go:89] found id: ""
	I0818 20:09:51.939945   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.939955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:51.939963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:51.940024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:51.974045   74389 cri.go:89] found id: ""
	I0818 20:09:51.974070   74389 logs.go:276] 0 containers: []
	W0818 20:09:51.974078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:51.974083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:51.974135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:52.010667   74389 cri.go:89] found id: ""
	I0818 20:09:52.010693   74389 logs.go:276] 0 containers: []
	W0818 20:09:52.010700   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:52.010709   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:52.010719   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:52.058709   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:52.058742   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:52.073252   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:52.073276   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:52.142466   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:52.142491   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:52.142507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:52.219766   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:52.219801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:54.759543   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:54.773167   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:54.773248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:54.808795   74389 cri.go:89] found id: ""
	I0818 20:09:54.808822   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.808833   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:54.808841   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:54.808910   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:54.843282   74389 cri.go:89] found id: ""
	I0818 20:09:54.843304   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.843313   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:54.843318   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:54.843397   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:54.879109   74389 cri.go:89] found id: ""
	I0818 20:09:54.879136   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.879147   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:54.879154   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:54.879216   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:54.914762   74389 cri.go:89] found id: ""
	I0818 20:09:54.914789   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.914798   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:54.914806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:54.914864   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:54.950650   74389 cri.go:89] found id: ""
	I0818 20:09:54.950676   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.950692   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:54.950699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:54.950757   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:54.985001   74389 cri.go:89] found id: ""
	I0818 20:09:54.985029   74389 logs.go:276] 0 containers: []
	W0818 20:09:54.985040   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:54.985047   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:54.985106   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:55.019973   74389 cri.go:89] found id: ""
	I0818 20:09:55.020002   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.020010   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:55.020016   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:55.020074   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:55.058240   74389 cri.go:89] found id: ""
	I0818 20:09:55.058269   74389 logs.go:276] 0 containers: []
	W0818 20:09:55.058278   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:55.058286   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:55.058297   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:55.109984   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:55.110019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:55.126098   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:55.126128   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:55.210618   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:55.210637   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:55.210649   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:55.293124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:55.293165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:09:52.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:55.239445   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:54.682549   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.182480   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.278685   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.279456   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:57.841891   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:09:57.854601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:09:57.854657   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.890373   74389 cri.go:89] found id: ""
	I0818 20:09:57.890401   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.890412   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:09:57.890419   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:09:57.890478   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:09:57.931150   74389 cri.go:89] found id: ""
	I0818 20:09:57.931173   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.931181   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:09:57.931186   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:09:57.931237   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:09:57.968816   74389 cri.go:89] found id: ""
	I0818 20:09:57.968838   74389 logs.go:276] 0 containers: []
	W0818 20:09:57.968846   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:09:57.968854   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:09:57.968915   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:09:58.005762   74389 cri.go:89] found id: ""
	I0818 20:09:58.005785   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.005795   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:09:58.005802   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:09:58.005858   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:09:58.043973   74389 cri.go:89] found id: ""
	I0818 20:09:58.043995   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.044005   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:09:58.044013   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:09:58.044072   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:09:58.081921   74389 cri.go:89] found id: ""
	I0818 20:09:58.081948   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.081959   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:09:58.081966   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:09:58.082039   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:09:58.118247   74389 cri.go:89] found id: ""
	I0818 20:09:58.118274   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.118285   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:09:58.118292   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:09:58.118354   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:09:58.155358   74389 cri.go:89] found id: ""
	I0818 20:09:58.155397   74389 logs.go:276] 0 containers: []
	W0818 20:09:58.155408   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:09:58.155420   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:09:58.155433   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:09:58.208230   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:09:58.208262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:09:58.221745   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:09:58.221775   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:09:58.291605   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:09:58.291630   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:09:58.291646   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:09:58.373701   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:09:58.373736   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:00.916278   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:00.929758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:00.929828   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:09:57.739205   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.739780   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:02.240023   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:09:59.182638   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.182974   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:01.778759   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:04.279122   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:00.966104   74389 cri.go:89] found id: ""
	I0818 20:10:00.966133   74389 logs.go:276] 0 containers: []
	W0818 20:10:00.966147   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:00.966153   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:00.966202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:01.006244   74389 cri.go:89] found id: ""
	I0818 20:10:01.006272   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.006284   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:01.006291   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:01.006366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:01.052078   74389 cri.go:89] found id: ""
	I0818 20:10:01.052099   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.052107   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:01.052112   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:01.052166   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:01.091242   74389 cri.go:89] found id: ""
	I0818 20:10:01.091285   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.091296   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:01.091303   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:01.091365   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:01.128273   74389 cri.go:89] found id: ""
	I0818 20:10:01.128298   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.128309   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:01.128319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:01.128381   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:01.162933   74389 cri.go:89] found id: ""
	I0818 20:10:01.162958   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.162968   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:01.162976   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:01.163034   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:01.199512   74389 cri.go:89] found id: ""
	I0818 20:10:01.199538   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.199546   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:01.199551   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:01.199597   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:01.235268   74389 cri.go:89] found id: ""
	I0818 20:10:01.235293   74389 logs.go:276] 0 containers: []
	W0818 20:10:01.235304   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:01.235314   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:01.235328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:01.279798   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:01.279846   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:01.333554   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:01.333599   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:01.348231   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:01.348262   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:01.427375   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:01.427421   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:01.427437   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.012982   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:04.026625   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:04.026709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:04.062594   74389 cri.go:89] found id: ""
	I0818 20:10:04.062627   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.062638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:04.062649   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:04.062712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:04.098705   74389 cri.go:89] found id: ""
	I0818 20:10:04.098732   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.098743   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:04.098750   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:04.098816   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:04.139222   74389 cri.go:89] found id: ""
	I0818 20:10:04.139245   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.139254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:04.139262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:04.139320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:04.175155   74389 cri.go:89] found id: ""
	I0818 20:10:04.175181   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.175189   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:04.175196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:04.175249   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:04.212060   74389 cri.go:89] found id: ""
	I0818 20:10:04.212086   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.212094   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:04.212100   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:04.212157   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:04.252602   74389 cri.go:89] found id: ""
	I0818 20:10:04.252631   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.252641   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:04.252649   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:04.252708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:04.290662   74389 cri.go:89] found id: ""
	I0818 20:10:04.290692   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.290703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:04.290710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:04.290763   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:04.334199   74389 cri.go:89] found id: ""
	I0818 20:10:04.334227   74389 logs.go:276] 0 containers: []
	W0818 20:10:04.334238   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:04.334250   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:04.334265   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:04.377452   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:04.377487   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:04.432431   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:04.432467   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:04.446716   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:04.446743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:04.512818   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:04.512844   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:04.512857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:04.240223   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.738829   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:03.183498   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:05.681527   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.682456   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:06.281289   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:08.778838   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:07.089353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:07.102715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:07.102775   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:07.139129   74389 cri.go:89] found id: ""
	I0818 20:10:07.139159   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.139167   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:07.139173   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:07.139223   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:07.177152   74389 cri.go:89] found id: ""
	I0818 20:10:07.177178   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.177188   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:07.177196   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:07.177254   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:07.215940   74389 cri.go:89] found id: ""
	I0818 20:10:07.215966   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.215974   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:07.215979   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:07.216027   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:07.251671   74389 cri.go:89] found id: ""
	I0818 20:10:07.251699   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.251716   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:07.251724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:07.251771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:07.293808   74389 cri.go:89] found id: ""
	I0818 20:10:07.293844   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.293855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:07.293862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:07.293934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:07.328675   74389 cri.go:89] found id: ""
	I0818 20:10:07.328706   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.328716   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:07.328724   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:07.328789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:07.365394   74389 cri.go:89] found id: ""
	I0818 20:10:07.365419   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.365426   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:07.365432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:07.365501   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:07.401254   74389 cri.go:89] found id: ""
	I0818 20:10:07.401279   74389 logs.go:276] 0 containers: []
	W0818 20:10:07.401290   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:07.401301   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:07.401316   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:07.471676   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:07.471696   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:07.471709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:07.548676   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:07.548718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:07.588404   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:07.588438   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:07.640529   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:07.640565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.158668   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:10.173853   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:10.173950   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:10.212129   74389 cri.go:89] found id: ""
	I0818 20:10:10.212161   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.212172   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:10.212179   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:10.212244   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:10.254637   74389 cri.go:89] found id: ""
	I0818 20:10:10.254661   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.254669   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:10.254674   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:10.254727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:10.289661   74389 cri.go:89] found id: ""
	I0818 20:10:10.289693   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.289703   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:10.289710   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:10.289771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:10.325586   74389 cri.go:89] found id: ""
	I0818 20:10:10.325614   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.325621   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:10.325627   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:10.325684   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:10.363345   74389 cri.go:89] found id: ""
	I0818 20:10:10.363373   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.363407   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:10.363415   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:10.363477   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:10.402162   74389 cri.go:89] found id: ""
	I0818 20:10:10.402185   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.402193   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:10.402199   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:10.402257   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:10.439096   74389 cri.go:89] found id: ""
	I0818 20:10:10.439125   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.439136   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:10.439144   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:10.439211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:10.473735   74389 cri.go:89] found id: ""
	I0818 20:10:10.473760   74389 logs.go:276] 0 containers: []
	W0818 20:10:10.473767   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:10.473775   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:10.473788   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:10.525170   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:10.525212   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:10.539801   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:10.539827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:10.626241   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:10.626259   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:10.626273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:10.701172   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:10.701205   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:09.238297   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:11.240258   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.182214   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:12.182485   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:10.778909   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.279849   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:13.241319   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:13.256372   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:13.256446   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:13.295570   74389 cri.go:89] found id: ""
	I0818 20:10:13.295596   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.295604   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:13.295609   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:13.295666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:13.332573   74389 cri.go:89] found id: ""
	I0818 20:10:13.332599   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.332610   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:13.332617   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:13.332669   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:13.369132   74389 cri.go:89] found id: ""
	I0818 20:10:13.369161   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.369172   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:13.369179   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:13.369239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:13.407548   74389 cri.go:89] found id: ""
	I0818 20:10:13.407574   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.407591   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:13.407599   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:13.407658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:13.441443   74389 cri.go:89] found id: ""
	I0818 20:10:13.441469   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.441479   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:13.441485   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:13.441551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:13.474097   74389 cri.go:89] found id: ""
	I0818 20:10:13.474124   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.474140   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:13.474148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:13.474211   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:13.507887   74389 cri.go:89] found id: ""
	I0818 20:10:13.507910   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.507918   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:13.507924   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:13.507984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:13.546502   74389 cri.go:89] found id: ""
	I0818 20:10:13.546530   74389 logs.go:276] 0 containers: []
	W0818 20:10:13.546538   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:13.546546   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:13.546561   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:13.560297   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:13.560319   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:13.628526   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:13.628548   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:13.628560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:13.712275   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:13.712310   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:13.757608   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:13.757641   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:13.739554   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.240247   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:14.182841   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.682427   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:15.778555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:17.779315   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:16.316052   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:16.330643   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:16.330704   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:16.375316   74389 cri.go:89] found id: ""
	I0818 20:10:16.375345   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.375355   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:16.375361   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:16.375453   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:16.420986   74389 cri.go:89] found id: ""
	I0818 20:10:16.421013   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.421025   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:16.421032   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:16.421108   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:16.459484   74389 cri.go:89] found id: ""
	I0818 20:10:16.459511   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.459523   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:16.459529   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:16.459582   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:16.497634   74389 cri.go:89] found id: ""
	I0818 20:10:16.497661   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.497669   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:16.497674   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:16.497727   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:16.532854   74389 cri.go:89] found id: ""
	I0818 20:10:16.532884   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.532895   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:16.532903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:16.532963   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:16.569638   74389 cri.go:89] found id: ""
	I0818 20:10:16.569660   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.569666   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:16.569673   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:16.569729   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:16.608362   74389 cri.go:89] found id: ""
	I0818 20:10:16.608396   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.608404   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:16.608410   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:16.608470   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:16.648595   74389 cri.go:89] found id: ""
	I0818 20:10:16.648620   74389 logs.go:276] 0 containers: []
	W0818 20:10:16.648627   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:16.648636   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:16.648647   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:16.731360   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:16.731404   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:16.772292   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:16.772325   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:16.825603   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:16.825644   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:16.839720   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:16.839743   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:16.911348   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.412195   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:19.426106   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:19.426181   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:19.462260   74389 cri.go:89] found id: ""
	I0818 20:10:19.462288   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.462297   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:19.462302   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:19.462358   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:19.499486   74389 cri.go:89] found id: ""
	I0818 20:10:19.499512   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.499520   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:19.499525   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:19.499571   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:19.534046   74389 cri.go:89] found id: ""
	I0818 20:10:19.534073   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.534090   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:19.534097   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:19.534153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:19.570438   74389 cri.go:89] found id: ""
	I0818 20:10:19.570468   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.570507   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:19.570515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:19.570579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:19.604690   74389 cri.go:89] found id: ""
	I0818 20:10:19.604712   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.604721   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:19.604729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:19.604789   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:19.641464   74389 cri.go:89] found id: ""
	I0818 20:10:19.641492   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.641504   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:19.641512   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:19.641573   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:19.679312   74389 cri.go:89] found id: ""
	I0818 20:10:19.679343   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.679354   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:19.679362   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:19.679442   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:19.717375   74389 cri.go:89] found id: ""
	I0818 20:10:19.717399   74389 logs.go:276] 0 containers: []
	W0818 20:10:19.717407   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:19.717415   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:19.717429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:19.761482   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:19.761506   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:19.813581   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:19.813614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:19.827992   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:19.828019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:19.898439   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:19.898465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:19.898477   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:18.739993   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.241320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:19.182059   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:21.681310   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:20.278905   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.779594   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:22.480565   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:22.493848   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:22.493931   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:22.536172   74389 cri.go:89] found id: ""
	I0818 20:10:22.536198   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.536206   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:22.536212   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:22.536271   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:22.574361   74389 cri.go:89] found id: ""
	I0818 20:10:22.574386   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.574393   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:22.574400   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:22.574450   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:22.609385   74389 cri.go:89] found id: ""
	I0818 20:10:22.609414   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.609422   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:22.609427   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:22.609476   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:22.645474   74389 cri.go:89] found id: ""
	I0818 20:10:22.645497   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.645508   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:22.645515   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:22.645575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:22.686160   74389 cri.go:89] found id: ""
	I0818 20:10:22.686185   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.686193   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:22.686198   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:22.686243   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:22.722597   74389 cri.go:89] found id: ""
	I0818 20:10:22.722623   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.722631   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:22.722637   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:22.722686   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:22.776684   74389 cri.go:89] found id: ""
	I0818 20:10:22.776708   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.776718   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:22.776725   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:22.776783   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:22.824089   74389 cri.go:89] found id: ""
	I0818 20:10:22.824114   74389 logs.go:276] 0 containers: []
	W0818 20:10:22.824122   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:22.824140   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:22.824153   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:22.878281   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:22.878321   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:22.894932   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:22.894962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:22.961750   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:22.961769   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:22.961783   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:23.048341   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:23.048391   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:25.595227   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:25.608347   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:25.608405   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:25.644636   74389 cri.go:89] found id: ""
	I0818 20:10:25.644666   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.644673   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:25.644679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:25.644739   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:25.681564   74389 cri.go:89] found id: ""
	I0818 20:10:25.681592   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.681602   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:25.681610   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:25.681666   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:25.717107   74389 cri.go:89] found id: ""
	I0818 20:10:25.717136   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.717143   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:25.717149   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:25.717206   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:25.752155   74389 cri.go:89] found id: ""
	I0818 20:10:25.752185   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.752197   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:25.752205   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:25.752281   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:25.789485   74389 cri.go:89] found id: ""
	I0818 20:10:25.789509   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.789522   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:25.789527   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:25.789581   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:25.831164   74389 cri.go:89] found id: ""
	I0818 20:10:25.831191   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.831201   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:25.831208   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:25.831267   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:25.870046   74389 cri.go:89] found id: ""
	I0818 20:10:25.870069   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.870078   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:25.870083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:25.870138   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:25.906752   74389 cri.go:89] found id: ""
	I0818 20:10:25.906775   74389 logs.go:276] 0 containers: []
	W0818 20:10:25.906783   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:25.906790   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:25.906801   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:23.739354   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.739406   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:23.682161   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.683137   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.279240   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:27.778736   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:25.958731   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:25.958761   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:25.972223   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:25.972249   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:26.051895   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:26.051923   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:26.051939   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:26.136065   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:26.136098   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.677374   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:28.694626   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:28.694709   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:28.741471   74389 cri.go:89] found id: ""
	I0818 20:10:28.741497   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.741507   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:28.741514   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:28.741575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:28.795647   74389 cri.go:89] found id: ""
	I0818 20:10:28.795675   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.795686   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:28.795693   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:28.795760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:28.841877   74389 cri.go:89] found id: ""
	I0818 20:10:28.841899   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.841907   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:28.841914   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:28.841960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:28.877098   74389 cri.go:89] found id: ""
	I0818 20:10:28.877234   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.877256   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:28.877263   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:28.877320   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:28.912278   74389 cri.go:89] found id: ""
	I0818 20:10:28.912303   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.912313   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:28.912321   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:28.912378   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:28.949730   74389 cri.go:89] found id: ""
	I0818 20:10:28.949758   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.949766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:28.949772   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:28.949819   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:28.987272   74389 cri.go:89] found id: ""
	I0818 20:10:28.987301   74389 logs.go:276] 0 containers: []
	W0818 20:10:28.987309   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:28.987315   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:28.987368   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:29.028334   74389 cri.go:89] found id: ""
	I0818 20:10:29.028368   74389 logs.go:276] 0 containers: []
	W0818 20:10:29.028376   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:29.028385   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:29.028395   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:29.081620   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:29.081654   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:29.095579   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:29.095604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:29.166581   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:29.166607   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:29.166622   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:29.246746   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:29.246779   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:28.238417   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.240302   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:28.182371   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.182431   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.182538   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:30.277705   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:32.279039   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.778467   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:31.792831   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:31.806150   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:31.806229   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:31.842943   74389 cri.go:89] found id: ""
	I0818 20:10:31.842976   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.842987   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:31.842995   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:31.843057   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:31.876865   74389 cri.go:89] found id: ""
	I0818 20:10:31.876892   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.876902   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:31.876909   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:31.876970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:31.912925   74389 cri.go:89] found id: ""
	I0818 20:10:31.912954   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.912964   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:31.912983   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:31.913063   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:31.947827   74389 cri.go:89] found id: ""
	I0818 20:10:31.947852   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.947860   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:31.947866   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:31.947914   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:31.982499   74389 cri.go:89] found id: ""
	I0818 20:10:31.982527   74389 logs.go:276] 0 containers: []
	W0818 20:10:31.982534   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:31.982540   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:31.982591   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:32.017890   74389 cri.go:89] found id: ""
	I0818 20:10:32.017923   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.017934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:32.017942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:32.017998   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:32.053277   74389 cri.go:89] found id: ""
	I0818 20:10:32.053305   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.053317   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:32.053324   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:32.053384   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:32.088459   74389 cri.go:89] found id: ""
	I0818 20:10:32.088487   74389 logs.go:276] 0 containers: []
	W0818 20:10:32.088495   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:32.088504   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:32.088515   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:32.138302   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:32.138335   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:32.152011   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:32.152037   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:32.224820   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.224839   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:32.224857   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:32.304491   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:32.304527   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:34.844961   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:34.857807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:34.857886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:34.893600   74389 cri.go:89] found id: ""
	I0818 20:10:34.893627   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.893638   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:34.893645   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:34.893708   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:34.928747   74389 cri.go:89] found id: ""
	I0818 20:10:34.928771   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.928778   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:34.928784   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:34.928829   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:34.966886   74389 cri.go:89] found id: ""
	I0818 20:10:34.966912   74389 logs.go:276] 0 containers: []
	W0818 20:10:34.966920   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:34.966925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:34.966987   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:35.004760   74389 cri.go:89] found id: ""
	I0818 20:10:35.004786   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.004794   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:35.004800   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:35.004848   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:35.039235   74389 cri.go:89] found id: ""
	I0818 20:10:35.039257   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.039265   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:35.039270   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:35.039318   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:35.078344   74389 cri.go:89] found id: ""
	I0818 20:10:35.078372   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.078380   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:35.078387   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:35.078447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:35.111939   74389 cri.go:89] found id: ""
	I0818 20:10:35.111962   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.111970   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:35.111975   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:35.112028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:35.145763   74389 cri.go:89] found id: ""
	I0818 20:10:35.145795   74389 logs.go:276] 0 containers: []
	W0818 20:10:35.145806   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:35.145815   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:35.145827   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:35.224812   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:35.224847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:35.265363   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:35.265397   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:35.320030   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:35.320062   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:35.335536   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:35.335568   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:35.408283   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:32.739086   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:35.239575   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:34.682089   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:36.682424   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.277613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:39.778047   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:37.908569   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:37.921954   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:37.922023   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:37.957319   74389 cri.go:89] found id: ""
	I0818 20:10:37.957347   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.957359   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:37.957366   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:37.957426   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:37.991370   74389 cri.go:89] found id: ""
	I0818 20:10:37.991410   74389 logs.go:276] 0 containers: []
	W0818 20:10:37.991421   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:37.991428   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:37.991488   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:38.033209   74389 cri.go:89] found id: ""
	I0818 20:10:38.033235   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.033243   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:38.033250   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:38.033307   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:38.072194   74389 cri.go:89] found id: ""
	I0818 20:10:38.072222   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.072230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:38.072237   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:38.072299   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:38.109711   74389 cri.go:89] found id: ""
	I0818 20:10:38.109735   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.109743   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:38.109748   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:38.109810   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:38.141374   74389 cri.go:89] found id: ""
	I0818 20:10:38.141397   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.141405   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:38.141411   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:38.141460   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:38.176025   74389 cri.go:89] found id: ""
	I0818 20:10:38.176052   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.176064   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:38.176071   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:38.176126   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:38.214720   74389 cri.go:89] found id: ""
	I0818 20:10:38.214749   74389 logs.go:276] 0 containers: []
	W0818 20:10:38.214760   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:38.214770   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:38.214790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:38.268377   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:38.268410   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:38.284220   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:38.284244   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:38.352517   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:38.352540   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:38.352552   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:38.435208   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:38.435240   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:37.743430   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.240404   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:38.682667   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.182697   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:41.779091   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.780368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:40.975594   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:40.989806   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:40.989871   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:41.024063   74389 cri.go:89] found id: ""
	I0818 20:10:41.024087   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.024095   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:41.024101   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:41.024154   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:41.062786   74389 cri.go:89] found id: ""
	I0818 20:10:41.062808   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.062815   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:41.062820   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:41.062869   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:41.098876   74389 cri.go:89] found id: ""
	I0818 20:10:41.098904   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.098914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:41.098922   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:41.098981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:41.133199   74389 cri.go:89] found id: ""
	I0818 20:10:41.133222   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.133230   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:41.133241   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:41.133303   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:41.165565   74389 cri.go:89] found id: ""
	I0818 20:10:41.165591   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.165599   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:41.165604   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:41.165651   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:41.198602   74389 cri.go:89] found id: ""
	I0818 20:10:41.198626   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.198633   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:41.198639   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:41.198699   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:41.233800   74389 cri.go:89] found id: ""
	I0818 20:10:41.233825   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.233835   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:41.233842   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:41.233902   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:41.274838   74389 cri.go:89] found id: ""
	I0818 20:10:41.274864   74389 logs.go:276] 0 containers: []
	W0818 20:10:41.274874   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:41.274884   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:41.274898   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:41.325885   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:41.325917   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:41.342021   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:41.342053   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:41.420802   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:41.420824   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:41.420837   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:41.502301   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:41.502336   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:44.040299   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:44.054723   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:44.054803   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:44.089955   74389 cri.go:89] found id: ""
	I0818 20:10:44.089984   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.089995   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:44.090005   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:44.090080   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:44.124311   74389 cri.go:89] found id: ""
	I0818 20:10:44.124335   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.124346   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:44.124353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:44.124428   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:44.161476   74389 cri.go:89] found id: ""
	I0818 20:10:44.161499   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.161510   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:44.161518   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:44.161579   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:44.197918   74389 cri.go:89] found id: ""
	I0818 20:10:44.197947   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.197958   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:44.197965   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:44.198028   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:44.232500   74389 cri.go:89] found id: ""
	I0818 20:10:44.232529   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.232542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:44.232549   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:44.232611   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:44.272235   74389 cri.go:89] found id: ""
	I0818 20:10:44.272266   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.272290   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:44.272308   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:44.272371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:44.309330   74389 cri.go:89] found id: ""
	I0818 20:10:44.309361   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.309371   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:44.309378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:44.309447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:44.345477   74389 cri.go:89] found id: ""
	I0818 20:10:44.345503   74389 logs.go:276] 0 containers: []
	W0818 20:10:44.345511   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:44.345518   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:44.345531   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:44.400241   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:44.400273   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:44.414741   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:44.414769   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:44.480817   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:44.480840   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:44.480855   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:44.560108   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:44.560144   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:42.739140   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:44.739349   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.739985   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:43.681897   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:45.682347   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.682385   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:46.278368   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:48.777847   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:47.098957   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:47.114384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:47.114462   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:47.148323   74389 cri.go:89] found id: ""
	I0818 20:10:47.148352   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.148362   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:47.148369   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:47.148436   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:47.184840   74389 cri.go:89] found id: ""
	I0818 20:10:47.184866   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.184876   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:47.184883   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:47.184940   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:47.217797   74389 cri.go:89] found id: ""
	I0818 20:10:47.217825   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.217833   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:47.217839   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:47.217886   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:47.252578   74389 cri.go:89] found id: ""
	I0818 20:10:47.252606   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.252613   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:47.252620   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:47.252668   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:47.290258   74389 cri.go:89] found id: ""
	I0818 20:10:47.290284   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.290292   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:47.290297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:47.290344   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:47.324912   74389 cri.go:89] found id: ""
	I0818 20:10:47.324945   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.324955   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:47.324961   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:47.325017   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:47.361223   74389 cri.go:89] found id: ""
	I0818 20:10:47.361252   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.361262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:47.361269   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:47.361328   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:47.396089   74389 cri.go:89] found id: ""
	I0818 20:10:47.396115   74389 logs.go:276] 0 containers: []
	W0818 20:10:47.396126   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:47.396135   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:47.396150   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:47.409907   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:47.409933   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:47.478089   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:47.478111   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:47.478126   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:47.556503   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:47.556542   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:47.596076   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:47.596106   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.148336   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:50.161602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:50.161663   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:50.198782   74389 cri.go:89] found id: ""
	I0818 20:10:50.198809   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.198820   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:50.198827   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:50.198906   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:50.238201   74389 cri.go:89] found id: ""
	I0818 20:10:50.238227   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.238238   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:50.238245   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:50.238308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:50.275442   74389 cri.go:89] found id: ""
	I0818 20:10:50.275469   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.275480   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:50.275488   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:50.275545   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:50.310693   74389 cri.go:89] found id: ""
	I0818 20:10:50.310723   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.310733   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:50.310740   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:50.310804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:50.345284   74389 cri.go:89] found id: ""
	I0818 20:10:50.345315   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.345326   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:50.345334   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:50.345404   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:50.382517   74389 cri.go:89] found id: ""
	I0818 20:10:50.382548   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.382559   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:50.382567   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:50.382626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:50.418647   74389 cri.go:89] found id: ""
	I0818 20:10:50.418676   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.418686   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:50.418692   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:50.418749   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:50.455794   74389 cri.go:89] found id: ""
	I0818 20:10:50.455823   74389 logs.go:276] 0 containers: []
	W0818 20:10:50.455834   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:50.455844   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:50.455859   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:50.497547   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:50.497578   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:50.549672   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:50.549705   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:50.564023   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:50.564052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:50.636673   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:50.636703   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:50.636718   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:49.238888   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:51.239699   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.182672   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.683492   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:50.778683   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:52.778843   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:53.217021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:53.230249   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:53.230308   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:53.266305   74389 cri.go:89] found id: ""
	I0818 20:10:53.266339   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.266348   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:53.266354   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:53.266421   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:53.304148   74389 cri.go:89] found id: ""
	I0818 20:10:53.304177   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.304187   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:53.304194   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:53.304252   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:53.342568   74389 cri.go:89] found id: ""
	I0818 20:10:53.342591   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.342598   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:53.342603   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:53.342659   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:53.380610   74389 cri.go:89] found id: ""
	I0818 20:10:53.380634   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.380644   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:53.380652   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:53.380712   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:53.420667   74389 cri.go:89] found id: ""
	I0818 20:10:53.420690   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.420701   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:53.420715   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:53.420777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:53.457767   74389 cri.go:89] found id: ""
	I0818 20:10:53.457793   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.457805   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:53.457812   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:53.457879   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:53.495408   74389 cri.go:89] found id: ""
	I0818 20:10:53.495436   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.495450   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:53.495455   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:53.495525   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:53.539121   74389 cri.go:89] found id: ""
	I0818 20:10:53.539148   74389 logs.go:276] 0 containers: []
	W0818 20:10:53.539159   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:53.539169   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:53.539185   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:53.591783   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:53.591812   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:53.605207   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:53.605231   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:53.681186   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:53.681207   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:53.681219   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:53.759357   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:53.759414   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:53.240375   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.738235   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.181390   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.181940   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:55.278430   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:57.278961   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.778449   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:56.307021   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:56.319933   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:56.320007   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:56.354283   74389 cri.go:89] found id: ""
	I0818 20:10:56.354311   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.354322   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:56.354328   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:56.354392   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:56.387810   74389 cri.go:89] found id: ""
	I0818 20:10:56.387838   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.387848   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:56.387855   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:56.387916   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:56.421960   74389 cri.go:89] found id: ""
	I0818 20:10:56.421990   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.422001   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:56.422012   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:56.422075   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:56.456416   74389 cri.go:89] found id: ""
	I0818 20:10:56.456447   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.456457   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:56.456465   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:56.456529   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:56.490758   74389 cri.go:89] found id: ""
	I0818 20:10:56.490786   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.490797   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:56.490804   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:56.490866   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:56.525045   74389 cri.go:89] found id: ""
	I0818 20:10:56.525067   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.525075   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:56.525080   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:56.525140   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:56.564961   74389 cri.go:89] found id: ""
	I0818 20:10:56.564984   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.564992   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:56.564997   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:56.565049   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:56.599279   74389 cri.go:89] found id: ""
	I0818 20:10:56.599309   74389 logs.go:276] 0 containers: []
	W0818 20:10:56.599321   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:56.599330   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:56.599341   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:56.648806   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:56.648831   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:56.661962   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:56.661982   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:56.728522   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:56.728539   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:56.728551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:56.813552   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:56.813585   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:59.370353   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:10:59.383936   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:10:59.384019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:10:59.418003   74389 cri.go:89] found id: ""
	I0818 20:10:59.418030   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.418041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:10:59.418048   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:10:59.418112   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:10:59.450978   74389 cri.go:89] found id: ""
	I0818 20:10:59.451007   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.451018   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:10:59.451026   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:10:59.451088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:10:59.484958   74389 cri.go:89] found id: ""
	I0818 20:10:59.485002   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.485013   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:10:59.485020   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:10:59.485084   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:10:59.517762   74389 cri.go:89] found id: ""
	I0818 20:10:59.517790   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.517800   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:10:59.517807   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:10:59.517856   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:10:59.552411   74389 cri.go:89] found id: ""
	I0818 20:10:59.552435   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.552446   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:10:59.552453   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:10:59.552515   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:10:59.586395   74389 cri.go:89] found id: ""
	I0818 20:10:59.586417   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.586425   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:10:59.586432   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:10:59.586481   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:10:59.619093   74389 cri.go:89] found id: ""
	I0818 20:10:59.619116   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.619124   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:10:59.619129   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:10:59.619188   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:10:59.650718   74389 cri.go:89] found id: ""
	I0818 20:10:59.650743   74389 logs.go:276] 0 containers: []
	W0818 20:10:59.650754   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:10:59.650774   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:10:59.650799   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:10:59.702870   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:10:59.702902   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:10:59.717005   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:10:59.717031   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:10:59.786440   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:10:59.786459   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:10:59.786473   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:10:59.872849   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:10:59.872885   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:10:57.740046   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:00.239797   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:10:59.182402   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.182516   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:01.779677   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.277808   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:02.416347   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:02.430903   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:02.430970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:02.466045   74389 cri.go:89] found id: ""
	I0818 20:11:02.466072   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.466082   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:02.466090   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:02.466152   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:02.502392   74389 cri.go:89] found id: ""
	I0818 20:11:02.502424   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.502432   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:02.502438   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:02.502485   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:02.545654   74389 cri.go:89] found id: ""
	I0818 20:11:02.545677   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.545685   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:02.545691   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:02.545746   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:02.586013   74389 cri.go:89] found id: ""
	I0818 20:11:02.586035   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.586043   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:02.586048   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:02.586095   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:02.629186   74389 cri.go:89] found id: ""
	I0818 20:11:02.629212   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.629220   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:02.629226   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:02.629276   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:02.668825   74389 cri.go:89] found id: ""
	I0818 20:11:02.668851   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.668859   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:02.668865   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:02.669073   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:02.707453   74389 cri.go:89] found id: ""
	I0818 20:11:02.707479   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.707489   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:02.707495   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:02.707547   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:02.756621   74389 cri.go:89] found id: ""
	I0818 20:11:02.756653   74389 logs.go:276] 0 containers: []
	W0818 20:11:02.756665   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:02.756680   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:02.756697   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:02.795853   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:02.795901   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.849480   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:02.849516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:02.868881   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:02.868916   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:02.945890   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:02.945913   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:02.945928   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:05.532997   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:05.546758   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:05.546820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:05.583632   74389 cri.go:89] found id: ""
	I0818 20:11:05.583659   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.583671   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:05.583679   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:05.583733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:05.623614   74389 cri.go:89] found id: ""
	I0818 20:11:05.623643   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.623652   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:05.623661   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:05.623722   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:05.659578   74389 cri.go:89] found id: ""
	I0818 20:11:05.659605   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.659616   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:05.659623   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:05.659679   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:05.695837   74389 cri.go:89] found id: ""
	I0818 20:11:05.695865   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.695876   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:05.695884   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:05.695946   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:05.732359   74389 cri.go:89] found id: ""
	I0818 20:11:05.732386   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.732397   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:05.732404   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:05.732466   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:05.769971   74389 cri.go:89] found id: ""
	I0818 20:11:05.770002   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.770014   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:05.770022   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:05.770088   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:05.804709   74389 cri.go:89] found id: ""
	I0818 20:11:05.804735   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.804745   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:05.804753   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:05.804820   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:05.842074   74389 cri.go:89] found id: ""
	I0818 20:11:05.842103   74389 logs.go:276] 0 containers: []
	W0818 20:11:05.842113   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:05.842124   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:05.842139   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:05.880046   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:05.880073   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:02.739940   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:04.740702   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:07.239660   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:03.682270   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.682964   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:06.278085   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.781247   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:05.937301   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:05.937332   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:05.951990   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:05.952019   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:06.026629   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:06.026648   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:06.026662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:08.610001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:08.625152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:08.625226   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:08.659409   74389 cri.go:89] found id: ""
	I0818 20:11:08.659438   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.659448   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:08.659462   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:08.659521   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:08.697523   74389 cri.go:89] found id: ""
	I0818 20:11:08.697556   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.697567   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:08.697575   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:08.697640   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:08.738659   74389 cri.go:89] found id: ""
	I0818 20:11:08.738685   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.738697   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:08.738704   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:08.738754   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:08.776856   74389 cri.go:89] found id: ""
	I0818 20:11:08.776882   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.776892   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:08.776900   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:08.776961   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:08.814026   74389 cri.go:89] found id: ""
	I0818 20:11:08.814131   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.814144   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:08.814152   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:08.814218   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:08.851661   74389 cri.go:89] found id: ""
	I0818 20:11:08.851684   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.851697   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:08.851702   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:08.851760   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:08.887486   74389 cri.go:89] found id: ""
	I0818 20:11:08.887515   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.887523   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:08.887536   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:08.887600   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:08.924323   74389 cri.go:89] found id: ""
	I0818 20:11:08.924348   74389 logs.go:276] 0 containers: []
	W0818 20:11:08.924358   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:08.924368   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:08.924383   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:08.938657   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:08.938684   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:09.007452   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:09.007476   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:09.007491   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:09.085483   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:09.085520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:09.124893   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:09.124932   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:09.240113   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.739320   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:08.182148   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:10.681873   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:12.682490   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.278330   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:13.278868   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:11.680536   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:11.694296   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:11.694363   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:11.731465   74389 cri.go:89] found id: ""
	I0818 20:11:11.731488   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.731499   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:11.731507   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:11.731560   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:11.769463   74389 cri.go:89] found id: ""
	I0818 20:11:11.769487   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.769498   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:11.769506   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:11.769567   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:11.812336   74389 cri.go:89] found id: ""
	I0818 20:11:11.812360   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.812371   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:11.812378   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:11.812439   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:11.846097   74389 cri.go:89] found id: ""
	I0818 20:11:11.846119   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.846127   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:11.846133   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:11.846184   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:11.888212   74389 cri.go:89] found id: ""
	I0818 20:11:11.888240   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.888250   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:11.888258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:11.888315   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:11.924928   74389 cri.go:89] found id: ""
	I0818 20:11:11.924958   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.924970   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:11.924977   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:11.925037   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:11.959304   74389 cri.go:89] found id: ""
	I0818 20:11:11.959333   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.959345   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:11.959352   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:11.959438   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:11.992387   74389 cri.go:89] found id: ""
	I0818 20:11:11.992418   74389 logs.go:276] 0 containers: []
	W0818 20:11:11.992427   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:11.992435   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:11.992447   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:12.033929   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:12.033960   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:12.091078   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:12.091131   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:12.106337   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:12.106378   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:12.184704   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:12.184729   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:12.184756   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:14.763116   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:14.779294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:14.779416   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:14.815876   74389 cri.go:89] found id: ""
	I0818 20:11:14.815899   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.815907   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:14.815913   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:14.815970   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:14.852032   74389 cri.go:89] found id: ""
	I0818 20:11:14.852064   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.852075   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:14.852083   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:14.852153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:14.885249   74389 cri.go:89] found id: ""
	I0818 20:11:14.885276   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.885285   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:14.885290   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:14.885360   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:14.919462   74389 cri.go:89] found id: ""
	I0818 20:11:14.919495   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.919506   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:14.919514   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:14.919578   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:14.952642   74389 cri.go:89] found id: ""
	I0818 20:11:14.952668   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.952679   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:14.952687   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:14.952750   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:14.988506   74389 cri.go:89] found id: ""
	I0818 20:11:14.988581   74389 logs.go:276] 0 containers: []
	W0818 20:11:14.988595   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:14.988601   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:14.988658   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:15.025554   74389 cri.go:89] found id: ""
	I0818 20:11:15.025578   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.025588   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:15.025595   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:15.025655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:15.068467   74389 cri.go:89] found id: ""
	I0818 20:11:15.068498   74389 logs.go:276] 0 containers: []
	W0818 20:11:15.068509   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:15.068519   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:15.068532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:15.126578   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:15.126614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:15.139991   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:15.140020   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:15.220277   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:15.220313   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:15.220327   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:15.303557   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:15.303591   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:14.240198   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:16.739103   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.688049   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:15.779050   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.779324   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:17.848235   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:17.861067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:17.861134   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:17.894397   74389 cri.go:89] found id: ""
	I0818 20:11:17.894423   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.894433   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:17.894440   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:17.894498   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:17.930160   74389 cri.go:89] found id: ""
	I0818 20:11:17.930188   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.930197   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:17.930202   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:17.930248   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:17.963256   74389 cri.go:89] found id: ""
	I0818 20:11:17.963284   74389 logs.go:276] 0 containers: []
	W0818 20:11:17.963293   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:17.963300   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:17.963359   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:18.002254   74389 cri.go:89] found id: ""
	I0818 20:11:18.002278   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.002286   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:18.002291   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:18.002339   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:18.036367   74389 cri.go:89] found id: ""
	I0818 20:11:18.036393   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.036405   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:18.036417   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:18.036480   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:18.073130   74389 cri.go:89] found id: ""
	I0818 20:11:18.073154   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.073165   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:18.073173   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:18.073236   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:18.114232   74389 cri.go:89] found id: ""
	I0818 20:11:18.114255   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.114262   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:18.114272   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:18.114331   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:18.146262   74389 cri.go:89] found id: ""
	I0818 20:11:18.146292   74389 logs.go:276] 0 containers: []
	W0818 20:11:18.146305   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:18.146315   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:18.146328   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:18.229041   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:18.229074   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:18.269856   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:18.269882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:18.324499   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:18.324537   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:18.338780   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:18.338802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:18.408222   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:20.908890   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:20.925338   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:20.925401   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:19.238499   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:21.239793   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.181477   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.181514   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.278360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:22.779285   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:20.971851   74389 cri.go:89] found id: ""
	I0818 20:11:20.971884   74389 logs.go:276] 0 containers: []
	W0818 20:11:20.971894   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:20.971901   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:20.971960   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:21.034359   74389 cri.go:89] found id: ""
	I0818 20:11:21.034440   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.034466   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:21.034484   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:21.034555   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:21.071565   74389 cri.go:89] found id: ""
	I0818 20:11:21.071588   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.071596   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:21.071602   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:21.071647   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:21.104909   74389 cri.go:89] found id: ""
	I0818 20:11:21.104937   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.104948   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:21.104955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:21.105005   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:21.148014   74389 cri.go:89] found id: ""
	I0818 20:11:21.148042   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.148052   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:21.148058   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:21.148120   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:21.183417   74389 cri.go:89] found id: ""
	I0818 20:11:21.183444   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.183453   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:21.183460   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:21.183517   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:21.218057   74389 cri.go:89] found id: ""
	I0818 20:11:21.218091   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.218099   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:21.218105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:21.218153   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:21.260043   74389 cri.go:89] found id: ""
	I0818 20:11:21.260069   74389 logs.go:276] 0 containers: []
	W0818 20:11:21.260076   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:21.260084   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:21.260095   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:21.302858   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:21.302883   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:21.356941   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:21.356973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:21.372225   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:21.372252   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:21.446627   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:21.446647   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:21.446662   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.028529   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:24.042299   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:24.042371   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:24.078586   74389 cri.go:89] found id: ""
	I0818 20:11:24.078621   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.078631   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:24.078639   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:24.078706   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:24.119129   74389 cri.go:89] found id: ""
	I0818 20:11:24.119156   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.119168   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:24.119175   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:24.119233   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:24.157543   74389 cri.go:89] found id: ""
	I0818 20:11:24.157571   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.157582   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:24.157589   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:24.157648   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:24.191925   74389 cri.go:89] found id: ""
	I0818 20:11:24.191948   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.191959   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:24.191970   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:24.192038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:24.228165   74389 cri.go:89] found id: ""
	I0818 20:11:24.228194   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.228206   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:24.228214   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:24.228277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:24.267727   74389 cri.go:89] found id: ""
	I0818 20:11:24.267758   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.267766   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:24.267771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:24.267830   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:24.303103   74389 cri.go:89] found id: ""
	I0818 20:11:24.303131   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.303142   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:24.303148   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:24.303217   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:24.339118   74389 cri.go:89] found id: ""
	I0818 20:11:24.339155   74389 logs.go:276] 0 containers: []
	W0818 20:11:24.339173   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:24.339183   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:24.339198   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:24.387767   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:24.387802   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:24.402161   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:24.402195   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:24.472445   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:24.472465   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:24.472478   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:24.551481   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:24.551520   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:23.739816   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.243360   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:24.182434   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:26.182980   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:25.277558   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.278088   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:29.278655   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:27.091492   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:27.104902   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:27.104974   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:27.140166   74389 cri.go:89] found id: ""
	I0818 20:11:27.140191   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.140200   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:27.140207   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:27.140264   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:27.174003   74389 cri.go:89] found id: ""
	I0818 20:11:27.174029   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.174038   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:27.174045   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:27.174105   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:27.210056   74389 cri.go:89] found id: ""
	I0818 20:11:27.210086   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.210097   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:27.210105   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:27.210165   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:27.247487   74389 cri.go:89] found id: ""
	I0818 20:11:27.247514   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.247524   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:27.247532   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:27.247588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:27.285557   74389 cri.go:89] found id: ""
	I0818 20:11:27.285580   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.285590   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:27.285597   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:27.285662   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:27.320763   74389 cri.go:89] found id: ""
	I0818 20:11:27.320792   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.320804   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:27.320811   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:27.320870   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:27.359154   74389 cri.go:89] found id: ""
	I0818 20:11:27.359179   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.359187   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:27.359192   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:27.359239   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:27.393923   74389 cri.go:89] found id: ""
	I0818 20:11:27.393945   74389 logs.go:276] 0 containers: []
	W0818 20:11:27.393955   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:27.393964   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:27.393974   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:27.445600   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:27.445631   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:27.459446   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:27.459471   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:27.529495   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:27.529520   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:27.529532   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:27.611416   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:27.611459   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:30.149545   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:30.162765   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:30.162834   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:30.200277   74389 cri.go:89] found id: ""
	I0818 20:11:30.200302   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.200312   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:30.200320   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:30.200373   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:30.234895   74389 cri.go:89] found id: ""
	I0818 20:11:30.234918   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.234926   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:30.234932   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:30.234977   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:30.268504   74389 cri.go:89] found id: ""
	I0818 20:11:30.268533   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.268543   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:30.268550   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:30.268614   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:30.308019   74389 cri.go:89] found id: ""
	I0818 20:11:30.308048   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.308059   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:30.308067   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:30.308130   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:30.343513   74389 cri.go:89] found id: ""
	I0818 20:11:30.343535   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.343542   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:30.343548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:30.343596   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:30.379087   74389 cri.go:89] found id: ""
	I0818 20:11:30.379110   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.379119   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:30.379124   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:30.379180   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:30.415859   74389 cri.go:89] found id: ""
	I0818 20:11:30.415887   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.415897   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:30.415905   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:30.415972   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:30.450670   74389 cri.go:89] found id: ""
	I0818 20:11:30.450699   74389 logs.go:276] 0 containers: []
	W0818 20:11:30.450710   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:30.450721   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:30.450737   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:30.503566   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:30.503603   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:30.517355   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:30.517382   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:30.587512   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:30.587531   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:30.587545   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:30.665708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:30.665745   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:28.739673   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.238716   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:28.681620   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:30.682755   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:32.682969   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:31.778900   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:33.205661   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:33.218962   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:33.219024   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:33.260011   74389 cri.go:89] found id: ""
	I0818 20:11:33.260033   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.260041   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:33.260046   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:33.260104   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:33.295351   74389 cri.go:89] found id: ""
	I0818 20:11:33.295396   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.295407   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:33.295415   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:33.295475   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:33.330857   74389 cri.go:89] found id: ""
	I0818 20:11:33.330882   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.330890   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:33.330895   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:33.330942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:33.367581   74389 cri.go:89] found id: ""
	I0818 20:11:33.367612   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.367623   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:33.367631   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:33.367691   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:33.404913   74389 cri.go:89] found id: ""
	I0818 20:11:33.404940   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.404950   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:33.404957   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:33.405019   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:33.450695   74389 cri.go:89] found id: ""
	I0818 20:11:33.450725   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.450736   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:33.450743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:33.450809   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:33.485280   74389 cri.go:89] found id: ""
	I0818 20:11:33.485309   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.485319   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:33.485327   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:33.485387   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:33.525648   74389 cri.go:89] found id: ""
	I0818 20:11:33.525678   74389 logs.go:276] 0 containers: []
	W0818 20:11:33.525688   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:33.525698   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:33.525710   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:33.579487   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:33.579516   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:33.593959   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:33.593984   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:33.659528   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:33.659545   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:33.659556   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:33.739787   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:33.739819   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:33.240237   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.739311   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:35.182357   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:37.682275   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.278357   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:38.279370   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:36.285367   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:36.298365   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:36.298431   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:36.334171   74389 cri.go:89] found id: ""
	I0818 20:11:36.334194   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.334205   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:36.334214   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:36.334278   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:36.372296   74389 cri.go:89] found id: ""
	I0818 20:11:36.372331   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.372342   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:36.372353   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:36.372419   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:36.411546   74389 cri.go:89] found id: ""
	I0818 20:11:36.411576   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.411585   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:36.411593   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:36.411656   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:36.449655   74389 cri.go:89] found id: ""
	I0818 20:11:36.449686   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.449697   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:36.449708   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:36.449782   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:36.488790   74389 cri.go:89] found id: ""
	I0818 20:11:36.488814   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.488821   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:36.488827   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:36.488880   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:36.522569   74389 cri.go:89] found id: ""
	I0818 20:11:36.522596   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.522606   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:36.522614   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:36.522674   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:36.557828   74389 cri.go:89] found id: ""
	I0818 20:11:36.557856   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.557866   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:36.557873   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:36.557934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:36.590632   74389 cri.go:89] found id: ""
	I0818 20:11:36.590658   74389 logs.go:276] 0 containers: []
	W0818 20:11:36.590669   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:36.590678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:36.590699   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:36.659655   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:36.659676   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:36.659690   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:36.739199   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:36.739225   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:36.778951   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:36.778973   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:36.833116   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:36.833167   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.349149   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:39.362568   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:39.362639   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:39.397441   74389 cri.go:89] found id: ""
	I0818 20:11:39.397467   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.397475   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:39.397480   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:39.397536   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:39.431110   74389 cri.go:89] found id: ""
	I0818 20:11:39.431137   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.431146   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:39.431153   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:39.431202   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:39.465263   74389 cri.go:89] found id: ""
	I0818 20:11:39.465288   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.465296   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:39.465302   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:39.465353   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:39.498721   74389 cri.go:89] found id: ""
	I0818 20:11:39.498746   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.498754   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:39.498759   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:39.498804   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:39.533151   74389 cri.go:89] found id: ""
	I0818 20:11:39.533178   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.533186   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:39.533191   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:39.533250   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:39.566818   74389 cri.go:89] found id: ""
	I0818 20:11:39.566845   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.566853   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:39.566859   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:39.566905   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:39.598699   74389 cri.go:89] found id: ""
	I0818 20:11:39.598722   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.598729   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:39.598734   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:39.598781   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:39.637666   74389 cri.go:89] found id: ""
	I0818 20:11:39.637693   74389 logs.go:276] 0 containers: []
	W0818 20:11:39.637702   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:39.637710   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:39.637721   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:39.693904   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:39.693936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:39.707678   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:39.707703   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:39.779936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:39.779955   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:39.779969   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:39.859799   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:39.859832   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:38.239229   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.240416   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:39.682587   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.187237   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:40.779225   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.779359   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.779661   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:42.399941   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:42.413140   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:42.413203   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:42.447972   74389 cri.go:89] found id: ""
	I0818 20:11:42.448001   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.448013   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:42.448020   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:42.448079   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:42.481806   74389 cri.go:89] found id: ""
	I0818 20:11:42.481834   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.481846   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:42.481854   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:42.481912   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:42.517446   74389 cri.go:89] found id: ""
	I0818 20:11:42.517477   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.517488   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:42.517496   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:42.517551   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:42.552046   74389 cri.go:89] found id: ""
	I0818 20:11:42.552070   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.552077   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:42.552083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:42.552128   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:42.587811   74389 cri.go:89] found id: ""
	I0818 20:11:42.587842   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.587855   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:42.587862   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:42.587918   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:42.621541   74389 cri.go:89] found id: ""
	I0818 20:11:42.621565   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.621573   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:42.621579   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:42.621626   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:42.659632   74389 cri.go:89] found id: ""
	I0818 20:11:42.659656   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.659665   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:42.659671   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:42.659718   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:42.694060   74389 cri.go:89] found id: ""
	I0818 20:11:42.694084   74389 logs.go:276] 0 containers: []
	W0818 20:11:42.694093   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:42.694103   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:42.694117   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:42.737579   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:42.737604   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:42.792481   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:42.792507   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:42.806701   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:42.806727   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:42.874878   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:42.874903   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:42.874918   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:45.460859   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:45.473430   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:45.473507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:45.513146   74389 cri.go:89] found id: ""
	I0818 20:11:45.513171   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.513180   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:45.513185   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:45.513242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:45.547911   74389 cri.go:89] found id: ""
	I0818 20:11:45.547938   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.547946   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:45.547956   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:45.548014   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:45.581607   74389 cri.go:89] found id: ""
	I0818 20:11:45.581630   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.581639   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:45.581646   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:45.581703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:45.617481   74389 cri.go:89] found id: ""
	I0818 20:11:45.617504   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.617512   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:45.617517   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:45.617563   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:45.654613   74389 cri.go:89] found id: ""
	I0818 20:11:45.654639   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.654646   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:45.654651   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:45.654703   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:45.689937   74389 cri.go:89] found id: ""
	I0818 20:11:45.689968   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.689978   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:45.689988   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:45.690047   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:45.728503   74389 cri.go:89] found id: ""
	I0818 20:11:45.728528   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.728537   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:45.728543   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:45.728588   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:45.763888   74389 cri.go:89] found id: ""
	I0818 20:11:45.763911   74389 logs.go:276] 0 containers: []
	W0818 20:11:45.763918   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:45.763926   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:45.763936   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:45.817990   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:45.818025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:45.832816   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:45.832847   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:45.908386   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:45.908414   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:45.908430   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:42.739642   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.240529   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:44.681898   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:46.683048   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:47.283360   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.780428   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:45.984955   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:45.984997   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:48.523620   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:48.536683   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:48.536743   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:48.575181   74389 cri.go:89] found id: ""
	I0818 20:11:48.575209   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.575219   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:48.575225   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:48.575277   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:48.616215   74389 cri.go:89] found id: ""
	I0818 20:11:48.616240   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.616249   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:48.616257   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:48.616310   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:48.653211   74389 cri.go:89] found id: ""
	I0818 20:11:48.653243   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.653254   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:48.653262   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:48.653324   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:48.688595   74389 cri.go:89] found id: ""
	I0818 20:11:48.688622   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.688630   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:48.688636   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:48.688681   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:48.724617   74389 cri.go:89] found id: ""
	I0818 20:11:48.724640   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.724649   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:48.724654   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:48.724701   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:48.767352   74389 cri.go:89] found id: ""
	I0818 20:11:48.767392   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.767401   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:48.767407   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:48.767468   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:48.806054   74389 cri.go:89] found id: ""
	I0818 20:11:48.806114   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.806128   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:48.806136   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:48.806204   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:48.843508   74389 cri.go:89] found id: ""
	I0818 20:11:48.843530   74389 logs.go:276] 0 containers: []
	W0818 20:11:48.843537   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:48.843545   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:48.843560   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:48.896074   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:48.896113   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:48.910035   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:48.910059   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:48.976115   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:48.976137   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:48.976154   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:49.056851   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:49.056882   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:47.739118   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.740073   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.238919   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:49.182997   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.682384   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:52.279233   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:54.779470   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:51.611935   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:51.624790   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:51.624867   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:51.665680   74389 cri.go:89] found id: ""
	I0818 20:11:51.665714   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.665725   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:51.665733   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:51.665788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:51.700399   74389 cri.go:89] found id: ""
	I0818 20:11:51.700420   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.700427   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:51.700433   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:51.700493   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:51.737046   74389 cri.go:89] found id: ""
	I0818 20:11:51.737070   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.737078   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:51.737083   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:51.737135   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:51.772299   74389 cri.go:89] found id: ""
	I0818 20:11:51.772324   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.772334   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:51.772342   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:51.772415   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:51.808493   74389 cri.go:89] found id: ""
	I0818 20:11:51.808534   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.808545   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:51.808552   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:51.808624   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:51.843887   74389 cri.go:89] found id: ""
	I0818 20:11:51.843923   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.843934   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:51.843942   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:51.844006   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:51.879230   74389 cri.go:89] found id: ""
	I0818 20:11:51.879258   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.879269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:51.879276   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:51.879335   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:51.914698   74389 cri.go:89] found id: ""
	I0818 20:11:51.914726   74389 logs.go:276] 0 containers: []
	W0818 20:11:51.914736   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:51.914747   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:51.914762   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:51.952205   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:51.952238   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:52.003520   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:52.003551   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:52.017368   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:52.017393   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:52.087046   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:52.087066   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:52.087078   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.679311   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:54.692319   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:54.692382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:54.733788   74389 cri.go:89] found id: ""
	I0818 20:11:54.733818   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.733829   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:54.733837   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:54.733900   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:54.776964   74389 cri.go:89] found id: ""
	I0818 20:11:54.776988   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.776995   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:54.777001   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:54.777056   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:54.811815   74389 cri.go:89] found id: ""
	I0818 20:11:54.811844   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.811854   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:54.811861   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:54.811923   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:54.865793   74389 cri.go:89] found id: ""
	I0818 20:11:54.865823   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.865833   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:54.865841   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:54.865899   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:54.900213   74389 cri.go:89] found id: ""
	I0818 20:11:54.900241   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.900251   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:54.900258   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:54.900322   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:54.933654   74389 cri.go:89] found id: ""
	I0818 20:11:54.933681   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.933691   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:54.933699   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:54.933771   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:54.967704   74389 cri.go:89] found id: ""
	I0818 20:11:54.967730   74389 logs.go:276] 0 containers: []
	W0818 20:11:54.967738   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:54.967743   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:54.967788   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:55.003783   74389 cri.go:89] found id: ""
	I0818 20:11:55.003807   74389 logs.go:276] 0 containers: []
	W0818 20:11:55.003817   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:55.003828   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:55.003842   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:11:55.042208   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:55.042241   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:55.092589   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:55.092625   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:55.106456   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:55.106483   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:55.178397   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:55.178415   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:55.178429   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:54.239638   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:56.240123   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:53.682822   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:55.683248   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.279035   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:59.779371   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:57.759304   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:11:57.771969   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:11:57.772038   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:11:57.808468   74389 cri.go:89] found id: ""
	I0818 20:11:57.808498   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.808508   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:11:57.808515   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:11:57.808575   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:11:57.842991   74389 cri.go:89] found id: ""
	I0818 20:11:57.843017   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.843027   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:11:57.843034   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:11:57.843097   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:57.882881   74389 cri.go:89] found id: ""
	I0818 20:11:57.882906   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.882914   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:11:57.882919   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:11:57.882966   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:11:57.918255   74389 cri.go:89] found id: ""
	I0818 20:11:57.918281   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.918291   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:11:57.918297   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:11:57.918345   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:11:57.952172   74389 cri.go:89] found id: ""
	I0818 20:11:57.952209   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.952218   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:11:57.952223   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:11:57.952319   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:11:57.985614   74389 cri.go:89] found id: ""
	I0818 20:11:57.985643   74389 logs.go:276] 0 containers: []
	W0818 20:11:57.985655   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:11:57.985662   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:11:57.985732   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:11:58.019506   74389 cri.go:89] found id: ""
	I0818 20:11:58.019531   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.019542   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:11:58.019548   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:11:58.019615   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:11:58.055793   74389 cri.go:89] found id: ""
	I0818 20:11:58.055826   74389 logs.go:276] 0 containers: []
	W0818 20:11:58.055838   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:11:58.055848   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:11:58.055863   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:11:58.111254   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:11:58.111295   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:11:58.125272   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:11:58.125309   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:11:58.194553   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:11:58.194582   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:11:58.194597   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:11:58.278559   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:11:58.278588   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:00.830001   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:00.842955   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:00.843033   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:00.879527   74389 cri.go:89] found id: ""
	I0818 20:12:00.879553   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.879561   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:00.879568   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:00.879620   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:00.915625   74389 cri.go:89] found id: ""
	I0818 20:12:00.915655   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.915666   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:00.915673   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:00.915733   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:11:58.240182   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.240387   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:11:58.182085   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.682855   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:02.278506   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:04.279952   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:00.950556   74389 cri.go:89] found id: ""
	I0818 20:12:00.950580   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.950589   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:00.950594   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:00.950641   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:00.985343   74389 cri.go:89] found id: ""
	I0818 20:12:00.985370   74389 logs.go:276] 0 containers: []
	W0818 20:12:00.985380   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:00.985386   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:00.985435   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:01.020836   74389 cri.go:89] found id: ""
	I0818 20:12:01.020862   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.020870   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:01.020876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:01.020934   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:01.057769   74389 cri.go:89] found id: ""
	I0818 20:12:01.057795   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.057807   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:01.057815   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:01.057876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:01.093238   74389 cri.go:89] found id: ""
	I0818 20:12:01.093261   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.093269   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:01.093275   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:01.093327   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:01.131626   74389 cri.go:89] found id: ""
	I0818 20:12:01.131650   74389 logs.go:276] 0 containers: []
	W0818 20:12:01.131660   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:01.131670   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:01.131685   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:01.171909   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:01.171934   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:01.228133   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:01.228165   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:01.247215   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:01.247251   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:01.344927   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:01.344948   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:01.344962   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:03.933110   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:03.948007   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:03.948087   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:03.989697   74389 cri.go:89] found id: ""
	I0818 20:12:03.989722   74389 logs.go:276] 0 containers: []
	W0818 20:12:03.989732   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:03.989751   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:03.989833   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:04.026893   74389 cri.go:89] found id: ""
	I0818 20:12:04.026920   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.026931   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:04.026938   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:04.026993   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:04.063857   74389 cri.go:89] found id: ""
	I0818 20:12:04.063889   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.063901   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:04.063908   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:04.063967   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:04.099164   74389 cri.go:89] found id: ""
	I0818 20:12:04.099183   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.099190   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:04.099196   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:04.099242   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:04.136421   74389 cri.go:89] found id: ""
	I0818 20:12:04.136449   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.136461   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:04.136468   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:04.136530   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:04.173728   74389 cri.go:89] found id: ""
	I0818 20:12:04.173753   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.173764   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:04.173771   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:04.173832   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:04.209534   74389 cri.go:89] found id: ""
	I0818 20:12:04.209558   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.209568   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:04.209575   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:04.209637   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:04.246772   74389 cri.go:89] found id: ""
	I0818 20:12:04.246800   74389 logs.go:276] 0 containers: []
	W0818 20:12:04.246813   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:04.246823   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:04.246839   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:04.289878   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:04.289909   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:04.343243   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:04.343279   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:04.359538   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:04.359565   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:04.429996   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:04.430021   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:04.430034   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:02.739623   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.239503   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.240563   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:03.182703   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:05.183099   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.682942   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:06.780051   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:09.283183   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:07.013984   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:07.030554   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:07.030633   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:07.075824   74389 cri.go:89] found id: ""
	I0818 20:12:07.075854   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.075861   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:07.075867   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:07.075929   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:07.121869   74389 cri.go:89] found id: ""
	I0818 20:12:07.121903   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.121915   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:07.121922   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:07.121984   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:07.161913   74389 cri.go:89] found id: ""
	I0818 20:12:07.161943   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.161955   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:07.161963   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:07.162021   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:07.212344   74389 cri.go:89] found id: ""
	I0818 20:12:07.212370   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.212377   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:07.212384   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:07.212447   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:07.250641   74389 cri.go:89] found id: ""
	I0818 20:12:07.250672   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.250683   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:07.250690   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:07.250751   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:07.287960   74389 cri.go:89] found id: ""
	I0818 20:12:07.287987   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.287995   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:07.288000   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:07.288059   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:07.323005   74389 cri.go:89] found id: ""
	I0818 20:12:07.323028   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.323036   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:07.323041   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:07.323089   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:07.359438   74389 cri.go:89] found id: ""
	I0818 20:12:07.359463   74389 logs.go:276] 0 containers: []
	W0818 20:12:07.359471   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:07.359479   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:07.359490   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:07.399339   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:07.399370   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:07.451878   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:07.451914   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:07.466171   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:07.466196   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:07.537853   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:07.537878   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:07.537895   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.120071   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:10.133489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:10.133570   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:10.173725   74389 cri.go:89] found id: ""
	I0818 20:12:10.173749   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.173758   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:10.173766   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:10.173826   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:10.211727   74389 cri.go:89] found id: ""
	I0818 20:12:10.211750   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.211758   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:10.211764   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:10.211825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:10.254724   74389 cri.go:89] found id: ""
	I0818 20:12:10.254751   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.254762   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:10.254769   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:10.254825   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:10.292458   74389 cri.go:89] found id: ""
	I0818 20:12:10.292477   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.292484   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:10.292489   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:10.292546   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:10.326410   74389 cri.go:89] found id: ""
	I0818 20:12:10.326435   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.326442   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:10.326447   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:10.326495   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:10.364962   74389 cri.go:89] found id: ""
	I0818 20:12:10.364992   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.365003   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:10.365010   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:10.365064   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:10.407866   74389 cri.go:89] found id: ""
	I0818 20:12:10.407893   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.407902   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:10.407909   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:10.407980   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:10.446108   74389 cri.go:89] found id: ""
	I0818 20:12:10.446130   74389 logs.go:276] 0 containers: []
	W0818 20:12:10.446138   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:10.446146   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:10.446159   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:10.496408   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:10.496439   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:10.510760   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:10.510790   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:10.586328   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:10.586348   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:10.586359   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:10.668708   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:10.668746   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:09.738372   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.738978   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:10.183297   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:12.682617   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:11.778895   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.779613   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:13.213370   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:13.226701   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:13.226774   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:13.271397   74389 cri.go:89] found id: ""
	I0818 20:12:13.271426   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.271437   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:13.271446   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:13.271507   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:13.314769   74389 cri.go:89] found id: ""
	I0818 20:12:13.314795   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.314803   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:13.314809   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:13.314855   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:13.355639   74389 cri.go:89] found id: ""
	I0818 20:12:13.355665   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.355674   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:13.355680   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:13.355728   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:13.399051   74389 cri.go:89] found id: ""
	I0818 20:12:13.399075   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.399083   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:13.399089   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:13.399136   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:13.432248   74389 cri.go:89] found id: ""
	I0818 20:12:13.432276   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.432288   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:13.432294   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:13.432356   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:13.466882   74389 cri.go:89] found id: ""
	I0818 20:12:13.466908   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.466918   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:13.466925   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:13.466983   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:13.506017   74389 cri.go:89] found id: ""
	I0818 20:12:13.506044   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.506055   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:13.506062   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:13.506111   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:13.543846   74389 cri.go:89] found id: ""
	I0818 20:12:13.543867   74389 logs.go:276] 0 containers: []
	W0818 20:12:13.543875   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:13.543882   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:13.543893   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:13.598604   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:13.598638   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:13.613226   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:13.613253   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:13.683353   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:13.683374   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:13.683411   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:13.771944   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:13.771981   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:14.239433   73815 pod_ready.go:103] pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:14.733714   73815 pod_ready.go:82] duration metric: took 4m0.000909376s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:14.733756   73815 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-g2kt7" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:14.733773   73815 pod_ready.go:39] duration metric: took 4m10.006922238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:14.733798   73815 kubeadm.go:597] duration metric: took 4m18.227938977s to restartPrimaryControlPlane
	W0818 20:12:14.733854   73815 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:14.733884   73815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:15.182539   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:17.682113   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.278810   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:18.279513   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:16.313712   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:16.328316   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:16.328382   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:16.361909   74389 cri.go:89] found id: ""
	I0818 20:12:16.361939   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.361947   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:16.361955   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:16.362015   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:16.402293   74389 cri.go:89] found id: ""
	I0818 20:12:16.402322   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.402334   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:16.402341   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:16.402407   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:16.441988   74389 cri.go:89] found id: ""
	I0818 20:12:16.442016   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.442027   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:16.442034   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:16.442101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:16.473853   74389 cri.go:89] found id: ""
	I0818 20:12:16.473876   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.473884   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:16.473889   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:16.473942   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:16.505830   74389 cri.go:89] found id: ""
	I0818 20:12:16.505857   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.505871   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:16.505876   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:16.505922   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:16.538782   74389 cri.go:89] found id: ""
	I0818 20:12:16.538805   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.538813   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:16.538819   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:16.538876   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:16.573665   74389 cri.go:89] found id: ""
	I0818 20:12:16.573693   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.573703   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:16.573711   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:16.573777   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:16.608961   74389 cri.go:89] found id: ""
	I0818 20:12:16.608988   74389 logs.go:276] 0 containers: []
	W0818 20:12:16.608999   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:16.609010   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:16.609025   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:16.686936   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:16.686952   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:16.686963   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:16.771373   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:16.771421   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:16.810409   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:16.810432   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:16.861987   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:16.862021   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.376796   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:19.389877   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:12:19.389943   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:12:19.429601   74389 cri.go:89] found id: ""
	I0818 20:12:19.429636   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.429647   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:12:19.429655   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:12:19.429715   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:12:19.468167   74389 cri.go:89] found id: ""
	I0818 20:12:19.468192   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.468204   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:12:19.468212   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:12:19.468259   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:12:19.506356   74389 cri.go:89] found id: ""
	I0818 20:12:19.506385   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.506396   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:12:19.506402   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:12:19.506459   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:12:19.544808   74389 cri.go:89] found id: ""
	I0818 20:12:19.544831   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.544839   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:12:19.544844   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:12:19.544897   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:12:19.579272   74389 cri.go:89] found id: ""
	I0818 20:12:19.579296   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.579307   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:12:19.579314   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:12:19.579399   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:12:19.612814   74389 cri.go:89] found id: ""
	I0818 20:12:19.612851   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.612863   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:12:19.612870   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:12:19.612945   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:12:19.646550   74389 cri.go:89] found id: ""
	I0818 20:12:19.646580   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.646590   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:12:19.646598   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:12:19.646655   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:12:19.680659   74389 cri.go:89] found id: ""
	I0818 20:12:19.680682   74389 logs.go:276] 0 containers: []
	W0818 20:12:19.680689   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:12:19.680697   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:12:19.680709   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:12:19.729173   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:12:19.729206   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:12:19.745104   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:12:19.745135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:12:19.823324   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:12:19.823345   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:12:19.823357   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:12:19.915046   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:12:19.915091   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:12:19.682712   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.182462   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:20.777741   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.779468   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:24.785394   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:22.458460   74389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:12:22.471849   74389 kubeadm.go:597] duration metric: took 4m3.535048026s to restartPrimaryControlPlane
	W0818 20:12:22.471923   74389 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:22.471953   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:23.883469   74389 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.411493783s)
	I0818 20:12:23.883548   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:23.897846   74389 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:23.908839   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:23.919251   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:23.919273   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:23.919317   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:23.929306   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:23.929385   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:23.939882   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:23.949270   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:23.949321   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:23.959179   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.968351   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:23.968411   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:23.978122   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:23.987324   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:23.987373   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:23.996776   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:24.209037   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:24.682001   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.182491   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:27.278406   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.279272   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:29.682104   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:32.181795   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:31.779163   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:33.782706   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:34.183088   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.682409   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:36.278136   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:38.278938   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.943045   73815 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.209137834s)
	I0818 20:12:40.943131   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:12:40.961902   73815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:12:40.984956   73815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:12:41.000828   73815 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:12:41.000855   73815 kubeadm.go:157] found existing configuration files:
	
	I0818 20:12:41.000908   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:12:41.019730   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:12:41.019782   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:12:41.031694   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:12:41.052082   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:12:41.052133   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:12:41.061682   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.070983   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:12:41.071036   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:12:41.083122   73815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:12:41.092977   73815 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:12:41.093041   73815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:12:41.103081   73815 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:12:41.155300   73815 kubeadm.go:310] W0818 20:12:41.112032    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.156131   73815 kubeadm.go:310] W0818 20:12:41.113028    2558 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:12:41.270071   73815 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:12:39.183290   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:41.682301   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:40.777979   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:42.779754   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:44.779992   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:43.683501   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:46.181489   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.616338   73815 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:12:49.616432   73815 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:12:49.616546   73815 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:12:49.616675   73815 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:12:49.616784   73815 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:12:49.616877   73815 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:12:49.618287   73815 out.go:235]   - Generating certificates and keys ...
	I0818 20:12:49.618354   73815 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:12:49.618414   73815 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:12:49.618486   73815 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:12:49.618537   73815 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:12:49.618598   73815 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:12:49.618648   73815 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:12:49.618700   73815 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:12:49.618779   73815 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:12:49.618892   73815 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:12:49.619007   73815 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:12:49.619065   73815 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:12:49.619163   73815 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:12:49.619214   73815 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:12:49.619269   73815 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:12:49.619331   73815 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:12:49.619436   73815 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:12:49.619486   73815 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:12:49.619556   73815 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:12:49.619619   73815 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:12:49.621003   73815 out.go:235]   - Booting up control plane ...
	I0818 20:12:49.621109   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:12:49.621195   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:12:49.621272   73815 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:12:49.621380   73815 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:12:49.621464   73815 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:12:49.621507   73815 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:12:49.621621   73815 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:12:49.621715   73815 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:12:49.621773   73815 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.427168ms
	I0818 20:12:49.621843   73815 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:12:49.621894   73815 kubeadm.go:310] [api-check] The API server is healthy after 5.00297116s
	I0818 20:12:49.621989   73815 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:12:49.622127   73815 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:12:49.622192   73815 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:12:49.622366   73815 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-291295 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:12:49.622416   73815 kubeadm.go:310] [bootstrap-token] Using token: y7e2le.i0q1jk5v0c0u0zuw
	I0818 20:12:49.623896   73815 out.go:235]   - Configuring RBAC rules ...
	I0818 20:12:49.623979   73815 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:12:49.624091   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:12:49.624245   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:12:49.624354   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:12:49.624455   73815 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:12:49.624526   73815 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:12:49.624621   73815 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:12:49.624675   73815 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:12:49.624718   73815 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:12:49.624724   73815 kubeadm.go:310] 
	I0818 20:12:49.624819   73815 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:12:49.624835   73815 kubeadm.go:310] 
	I0818 20:12:49.624933   73815 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:12:49.624943   73815 kubeadm.go:310] 
	I0818 20:12:49.624975   73815 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:12:49.625066   73815 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:12:49.625122   73815 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:12:49.625135   73815 kubeadm.go:310] 
	I0818 20:12:49.625210   73815 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:12:49.625217   73815 kubeadm.go:310] 
	I0818 20:12:49.625285   73815 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:12:49.625295   73815 kubeadm.go:310] 
	I0818 20:12:49.625364   73815 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:12:49.625469   73815 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:12:49.625552   73815 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:12:49.625563   73815 kubeadm.go:310] 
	I0818 20:12:49.625675   73815 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:12:49.625756   73815 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:12:49.625763   73815 kubeadm.go:310] 
	I0818 20:12:49.625858   73815 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.625943   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:12:49.625967   73815 kubeadm.go:310] 	--control-plane 
	I0818 20:12:49.625976   73815 kubeadm.go:310] 
	I0818 20:12:49.626089   73815 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:12:49.626099   73815 kubeadm.go:310] 
	I0818 20:12:49.626196   73815 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y7e2le.i0q1jk5v0c0u0zuw \
	I0818 20:12:49.626293   73815 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:12:49.626302   73815 cni.go:84] Creating CNI manager for ""
	I0818 20:12:49.626308   73815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:12:49.627714   73815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:12:47.280266   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.779502   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:49.628998   73815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:12:49.639640   73815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:12:49.657017   73815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:49.657102   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-291295 minikube.k8s.io/updated_at=2024_08_18T20_12_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=embed-certs-291295 minikube.k8s.io/primary=true
	I0818 20:12:49.685420   73815 ops.go:34] apiserver oom_adj: -16
	I0818 20:12:49.868146   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.368174   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:50.868256   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.368427   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:51.868632   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:52.368585   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:48.182188   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:50.681743   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.683179   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:52.869122   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.368635   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:53.869162   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.368223   73815 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:12:54.490893   73815 kubeadm.go:1113] duration metric: took 4.833865719s to wait for elevateKubeSystemPrivileges
	I0818 20:12:54.490919   73815 kubeadm.go:394] duration metric: took 4m58.032922921s to StartCluster
	I0818 20:12:54.490936   73815 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.491011   73815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:12:54.492769   73815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:12:54.493007   73815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.125 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:12:54.493069   73815 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:12:54.493160   73815 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-291295"
	I0818 20:12:54.493186   73815 addons.go:69] Setting default-storageclass=true in profile "embed-certs-291295"
	I0818 20:12:54.493208   73815 addons.go:69] Setting metrics-server=true in profile "embed-certs-291295"
	I0818 20:12:54.493226   73815 config.go:182] Loaded profile config "embed-certs-291295": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:12:54.493234   73815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-291295"
	I0818 20:12:54.493250   73815 addons.go:234] Setting addon metrics-server=true in "embed-certs-291295"
	W0818 20:12:54.493263   73815 addons.go:243] addon metrics-server should already be in state true
	I0818 20:12:54.493293   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493197   73815 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-291295"
	W0818 20:12:54.493423   73815 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:12:54.493454   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.493667   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493695   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493799   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493824   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.493839   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.493856   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.494988   73815 out.go:177] * Verifying Kubernetes components...
	I0818 20:12:54.496631   73815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0818 20:12:54.510362   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0818 20:12:54.510351   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41877
	I0818 20:12:54.510861   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510893   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.510904   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.511362   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511394   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511392   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511411   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511512   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.511532   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.511721   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511770   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.511858   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.512040   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.512246   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512269   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.512275   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.512287   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.515662   73815 addons.go:234] Setting addon default-storageclass=true in "embed-certs-291295"
	W0818 20:12:54.515684   73815 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:12:54.515713   73815 host.go:66] Checking if "embed-certs-291295" exists ...
	I0818 20:12:54.516066   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.516113   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.532752   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0818 20:12:54.532798   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I0818 20:12:54.533454   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.533570   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.534099   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534122   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534237   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.534256   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.534374   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534590   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.534626   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.534665   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33517
	I0818 20:12:54.534909   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.535373   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.535793   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.535808   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.536326   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.536411   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.536941   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.538860   73815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:12:54.538862   73815 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:12:52.279487   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.279652   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:54.539061   73815 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:12:54.539290   73815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:12:54.540006   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:12:54.540024   73815 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:12:54.540043   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.540104   73815 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.540119   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:12:54.540144   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.543782   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544017   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544131   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544154   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544293   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544491   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.544517   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.544565   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.544734   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.544754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.544887   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.545060   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.545257   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.545502   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.558292   73815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0818 20:12:54.558721   73815 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:12:54.559184   73815 main.go:141] libmachine: Using API Version  1
	I0818 20:12:54.559200   73815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:12:54.559579   73815 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:12:54.559764   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetState
	I0818 20:12:54.561412   73815 main.go:141] libmachine: (embed-certs-291295) Calling .DriverName
	I0818 20:12:54.562138   73815 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.562153   73815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:12:54.562169   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHHostname
	I0818 20:12:54.565078   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565524   73815 main.go:141] libmachine: (embed-certs-291295) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b0:4d:ce", ip: ""} in network mk-embed-certs-291295: {Iface:virbr1 ExpiryTime:2024-08-18 21:07:43 +0000 UTC Type:0 Mac:52:54:00:b0:4d:ce Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:embed-certs-291295 Clientid:01:52:54:00:b0:4d:ce}
	I0818 20:12:54.565543   73815 main.go:141] libmachine: (embed-certs-291295) DBG | domain embed-certs-291295 has defined IP address 192.168.39.125 and MAC address 52:54:00:b0:4d:ce in network mk-embed-certs-291295
	I0818 20:12:54.565782   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHPort
	I0818 20:12:54.565954   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHKeyPath
	I0818 20:12:54.566107   73815 main.go:141] libmachine: (embed-certs-291295) Calling .GetSSHUsername
	I0818 20:12:54.566265   73815 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/embed-certs-291295/id_rsa Username:docker}
	I0818 20:12:54.738286   73815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:12:54.804581   73815 node_ready.go:35] waiting up to 6m0s for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813953   73815 node_ready.go:49] node "embed-certs-291295" has status "Ready":"True"
	I0818 20:12:54.813984   73815 node_ready.go:38] duration metric: took 9.367719ms for node "embed-certs-291295" to be "Ready" ...
	I0818 20:12:54.813995   73815 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:54.820670   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:12:54.884787   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:12:54.884808   73815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:12:54.891500   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:12:54.917894   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:12:54.939854   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:12:54.939873   73815 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:12:55.023663   73815 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:55.023684   73815 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:12:55.049846   73815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:12:56.106099   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.188173933s)
	I0818 20:12:56.106164   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106173   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106502   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106504   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.106519   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.106529   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.106537   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.106774   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.106788   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107412   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.21588373s)
	I0818 20:12:56.107447   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107459   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.107656   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.107729   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.107739   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.107747   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.107754   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.108054   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.108095   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.108105   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.163788   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.163816   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.164087   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.164137   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239269   73815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.189381338s)
	I0818 20:12:56.239327   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239341   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.239712   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.239767   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.239748   73815 main.go:141] libmachine: (embed-certs-291295) DBG | Closing plugin on server side
	I0818 20:12:56.239782   73815 main.go:141] libmachine: Making call to close driver server
	I0818 20:12:56.239792   73815 main.go:141] libmachine: (embed-certs-291295) Calling .Close
	I0818 20:12:56.240000   73815 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:12:56.240017   73815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:12:56.240028   73815 addons.go:475] Verifying addon metrics-server=true in "embed-certs-291295"
	I0818 20:12:56.241750   73815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:12:56.243157   73815 addons.go:510] duration metric: took 1.750082977s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:12:56.827912   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:55.184449   74485 pod_ready.go:103] pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:57.676039   74485 pod_ready.go:82] duration metric: took 4m0.000245975s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" ...
	E0818 20:12:57.676064   74485 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-brqj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0818 20:12:57.676106   74485 pod_ready.go:39] duration metric: took 4m11.533331444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:12:57.676138   74485 kubeadm.go:597] duration metric: took 4m20.628972956s to restartPrimaryControlPlane
	W0818 20:12:57.676203   74485 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0818 20:12:57.676230   74485 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:12:56.778171   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:58.779960   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:12:59.328683   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.331560   73815 pod_ready.go:103] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:01.281134   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.281507   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:03.828543   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.828572   73815 pod_ready.go:82] duration metric: took 9.007869564s for pod "coredns-6f6b679f8f-6785z" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.828586   73815 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833396   73815 pod_ready.go:93] pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.833416   73815 pod_ready.go:82] duration metric: took 4.823533ms for pod "coredns-6f6b679f8f-fx7zv" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.833426   73815 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837837   73815 pod_ready.go:93] pod "etcd-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.837856   73815 pod_ready.go:82] duration metric: took 4.422926ms for pod "etcd-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.837864   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842646   73815 pod_ready.go:93] pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.842666   73815 pod_ready.go:82] duration metric: took 4.795789ms for pod "kube-apiserver-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.842675   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846697   73815 pod_ready.go:93] pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:03.846721   73815 pod_ready.go:82] duration metric: took 4.038999ms for pod "kube-controller-manager-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:03.846733   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224066   73815 pod_ready.go:93] pod "kube-proxy-8mv85" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.224088   73815 pod_ready.go:82] duration metric: took 377.347897ms for pod "kube-proxy-8mv85" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.224097   73815 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624310   73815 pod_ready.go:93] pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:04.624337   73815 pod_ready.go:82] duration metric: took 400.233574ms for pod "kube-scheduler-embed-certs-291295" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:04.624347   73815 pod_ready.go:39] duration metric: took 9.810340936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:04.624363   73815 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:04.624440   73815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:04.640514   73815 api_server.go:72] duration metric: took 10.147475745s to wait for apiserver process to appear ...
	I0818 20:13:04.640543   73815 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:04.640565   73815 api_server.go:253] Checking apiserver healthz at https://192.168.39.125:8443/healthz ...
	I0818 20:13:04.646120   73815 api_server.go:279] https://192.168.39.125:8443/healthz returned 200:
	ok
	I0818 20:13:04.646969   73815 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:04.646989   73815 api_server.go:131] duration metric: took 6.438722ms to wait for apiserver health ...
	I0818 20:13:04.646999   73815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:04.828347   73815 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:04.828385   73815 system_pods.go:61] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:04.828393   73815 system_pods.go:61] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:04.828398   73815 system_pods.go:61] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:04.828403   73815 system_pods.go:61] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:04.828416   73815 system_pods.go:61] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:04.828420   73815 system_pods.go:61] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:04.828425   73815 system_pods.go:61] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:04.828434   73815 system_pods.go:61] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:04.828441   73815 system_pods.go:61] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:04.828453   73815 system_pods.go:74] duration metric: took 181.44906ms to wait for pod list to return data ...
	I0818 20:13:04.828465   73815 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:05.030945   73815 default_sa.go:45] found service account: "default"
	I0818 20:13:05.030971   73815 default_sa.go:55] duration metric: took 202.497269ms for default service account to be created ...
	I0818 20:13:05.030981   73815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:05.226724   73815 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:05.226760   73815 system_pods.go:89] "coredns-6f6b679f8f-6785z" [6e4a0570-184c-4de8-a23d-05cc0409a71f] Running
	I0818 20:13:05.226769   73815 system_pods.go:89] "coredns-6f6b679f8f-fx7zv" [42876c85-5d36-47b3-ba18-2cc7e3edcfd2] Running
	I0818 20:13:05.226775   73815 system_pods.go:89] "etcd-embed-certs-291295" [737f04b6-91e8-495d-8454-8767c09b662a] Running
	I0818 20:13:05.226781   73815 system_pods.go:89] "kube-apiserver-embed-certs-291295" [a9a444c6-925b-44f9-a438-cb08a0e1c6c6] Running
	I0818 20:13:05.226790   73815 system_pods.go:89] "kube-controller-manager-embed-certs-291295" [ba61e389-bf9a-44d9-b9cc-71ab1ae7e655] Running
	I0818 20:13:05.226795   73815 system_pods.go:89] "kube-proxy-8mv85" [f46ec5d3-9303-47c1-b374-b0402d54427d] Running
	I0818 20:13:05.226801   73815 system_pods.go:89] "kube-scheduler-embed-certs-291295" [ed860a7a-6d86-4b54-a05d-af8de0bfabf1] Running
	I0818 20:13:05.226810   73815 system_pods.go:89] "metrics-server-6867b74b74-q9hsn" [91faef36-1509-4f19-8ac7-e72e242d46a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:05.226820   73815 system_pods.go:89] "storage-provisioner" [e89c78dc-0141-45b6-889c-9381599a39e2] Running
	I0818 20:13:05.226831   73815 system_pods.go:126] duration metric: took 195.843628ms to wait for k8s-apps to be running ...
	I0818 20:13:05.226843   73815 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:05.226892   73815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:05.242656   73815 system_svc.go:56] duration metric: took 15.80684ms WaitForService to wait for kubelet
	I0818 20:13:05.242681   73815 kubeadm.go:582] duration metric: took 10.749648174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:05.242698   73815 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:05.424616   73815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:05.424642   73815 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:05.424654   73815 node_conditions.go:105] duration metric: took 181.951421ms to run NodePressure ...
	I0818 20:13:05.424668   73815 start.go:241] waiting for startup goroutines ...
	I0818 20:13:05.424678   73815 start.go:246] waiting for cluster config update ...
	I0818 20:13:05.424692   73815 start.go:255] writing updated cluster config ...
	I0818 20:13:05.425003   73815 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:05.470859   73815 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:05.472909   73815 out.go:177] * Done! kubectl is now configured to use "embed-certs-291295" cluster and "default" namespace by default
	I0818 20:13:05.779555   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:07.783567   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:10.281617   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:12.780570   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:15.282024   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:17.779399   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:23.788389   74485 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.112134895s)
	I0818 20:13:23.788470   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:23.808611   74485 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 20:13:23.820139   74485 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:13:23.837253   74485 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:13:23.837282   74485 kubeadm.go:157] found existing configuration files:
	
	I0818 20:13:23.837345   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0818 20:13:23.848522   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:13:23.848595   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:13:23.857891   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0818 20:13:23.866756   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:13:23.866814   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:13:23.876332   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.885435   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:13:23.885535   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:13:23.896120   74485 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0818 20:13:23.905471   74485 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:13:23.905565   74485 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:13:23.915157   74485 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:13:23.963756   74485 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 20:13:23.963830   74485 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:13:24.083423   74485 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:13:24.083592   74485 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:13:24.083733   74485 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 20:13:24.097967   74485 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:13:24.099859   74485 out.go:235]   - Generating certificates and keys ...
	I0818 20:13:24.099926   74485 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:13:24.100020   74485 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:13:24.100125   74485 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:13:24.100212   74485 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:13:24.100310   74485 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:13:24.100389   74485 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:13:24.100476   74485 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:13:24.100592   74485 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:13:24.100711   74485 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:13:24.100829   74485 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:13:24.100891   74485 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:13:24.100978   74485 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:13:24.298737   74485 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:13:24.592511   74485 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 20:13:24.686316   74485 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:13:24.796124   74485 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:13:24.910646   74485 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:13:24.911060   74485 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:13:24.913486   74485 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:13:20.281479   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:22.779269   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:24.914894   74485 out.go:235]   - Booting up control plane ...
	I0818 20:13:24.915018   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:13:24.915106   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:13:24.915303   74485 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:13:24.938289   74485 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:13:24.944304   74485 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:13:24.944367   74485 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:13:25.078685   74485 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 20:13:25.078813   74485 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 20:13:25.580725   74485 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.092954ms
	I0818 20:13:25.580847   74485 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 20:13:25.280695   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:27.285875   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:29.779058   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:30.583574   74485 kubeadm.go:310] [api-check] The API server is healthy after 5.001121585s
	I0818 20:13:30.596453   74485 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 20:13:30.616459   74485 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 20:13:30.647753   74485 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 20:13:30.648063   74485 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-852598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 20:13:30.661702   74485 kubeadm.go:310] [bootstrap-token] Using token: zx02gp.uvda3nvhhfc3i2l5
	I0818 20:13:30.663166   74485 out.go:235]   - Configuring RBAC rules ...
	I0818 20:13:30.663321   74485 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 20:13:30.671440   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 20:13:30.682462   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 20:13:30.690376   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 20:13:30.699091   74485 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 20:13:30.704304   74485 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 20:13:30.989576   74485 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 20:13:31.435191   74485 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 20:13:31.989155   74485 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 20:13:31.991090   74485 kubeadm.go:310] 
	I0818 20:13:31.991172   74485 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 20:13:31.991188   74485 kubeadm.go:310] 
	I0818 20:13:31.991285   74485 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 20:13:31.991303   74485 kubeadm.go:310] 
	I0818 20:13:31.991337   74485 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 20:13:31.991506   74485 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 20:13:31.991584   74485 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 20:13:31.991605   74485 kubeadm.go:310] 
	I0818 20:13:31.991710   74485 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 20:13:31.991732   74485 kubeadm.go:310] 
	I0818 20:13:31.991802   74485 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 20:13:31.991814   74485 kubeadm.go:310] 
	I0818 20:13:31.991881   74485 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 20:13:31.991986   74485 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 20:13:31.992101   74485 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 20:13:31.992132   74485 kubeadm.go:310] 
	I0818 20:13:31.992250   74485 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 20:13:31.992345   74485 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 20:13:31.992358   74485 kubeadm.go:310] 
	I0818 20:13:31.992464   74485 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.992601   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 \
	I0818 20:13:31.992637   74485 kubeadm.go:310] 	--control-plane 
	I0818 20:13:31.992650   74485 kubeadm.go:310] 
	I0818 20:13:31.992760   74485 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 20:13:31.992778   74485 kubeadm.go:310] 
	I0818 20:13:31.992882   74485 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token zx02gp.uvda3nvhhfc3i2l5 \
	I0818 20:13:31.993030   74485 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1e19cee9d0d002aa95d8427cf2fd48d1f6ed73343ce3c10494ca8902f46491a8 
	I0818 20:13:31.994898   74485 kubeadm.go:310] W0818 20:13:23.918436    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995217   74485 kubeadm.go:310] W0818 20:13:23.919152    2569 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 20:13:31.995365   74485 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:13:31.995413   74485 cni.go:84] Creating CNI manager for ""
	I0818 20:13:31.995423   74485 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 20:13:31.997188   74485 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 20:13:31.998506   74485 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 20:13:32.011472   74485 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 20:13:32.031405   74485 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 20:13:32.031449   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.031494   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-852598 minikube.k8s.io/updated_at=2024_08_18T20_13_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=default-k8s-diff-port-852598 minikube.k8s.io/primary=true
	I0818 20:13:32.244997   74485 ops.go:34] apiserver oom_adj: -16
	I0818 20:13:32.245096   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.745775   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:32.279538   73711 pod_ready.go:103] pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:32.779152   73711 pod_ready.go:82] duration metric: took 4m0.006755386s for pod "metrics-server-6867b74b74-mhhbp" in "kube-system" namespace to be "Ready" ...
	E0818 20:13:32.779180   73711 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 20:13:32.779190   73711 pod_ready.go:39] duration metric: took 4m7.418715902s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:32.779207   73711 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:32.779240   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:32.779298   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:32.848109   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:32.848132   73711 cri.go:89] found id: ""
	I0818 20:13:32.848141   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:32.848201   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.852725   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:32.852789   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:32.899932   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:32.899957   73711 cri.go:89] found id: ""
	I0818 20:13:32.899969   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:32.900028   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.904698   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:32.904771   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:32.945320   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:32.945347   73711 cri.go:89] found id: ""
	I0818 20:13:32.945355   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:32.945411   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.949873   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:32.949935   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:32.986388   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:32.986409   73711 cri.go:89] found id: ""
	I0818 20:13:32.986415   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:32.986465   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:32.992213   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:32.992292   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:33.035535   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:33.035557   73711 cri.go:89] found id: ""
	I0818 20:13:33.035564   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:33.035622   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.039933   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:33.040006   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:33.077372   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:33.077395   73711 cri.go:89] found id: ""
	I0818 20:13:33.077404   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:33.077468   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.082254   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:33.082327   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:33.120142   73711 cri.go:89] found id: ""
	I0818 20:13:33.120181   73711 logs.go:276] 0 containers: []
	W0818 20:13:33.120192   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:33.120199   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:33.120267   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:33.159065   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:33.159089   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.159095   73711 cri.go:89] found id: ""
	I0818 20:13:33.159104   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:33.159164   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.163366   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:33.167301   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:33.167327   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:33.207982   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:33.208012   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:33.734525   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:33.734563   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:33.779286   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:33.779334   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:33.915330   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:33.915365   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:33.930057   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:33.930088   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:33.978282   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:33.978312   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:34.021464   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:34.021495   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:34.058242   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:34.058271   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:34.094203   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:34.094231   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:34.157812   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:34.157849   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:34.196259   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:34.196288   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:34.273774   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:34.273818   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:33.245388   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:33.745166   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.245920   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:34.745548   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.245436   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:35.745269   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.245383   74485 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 20:13:36.384146   74485 kubeadm.go:1113] duration metric: took 4.352781371s to wait for elevateKubeSystemPrivileges
	I0818 20:13:36.384182   74485 kubeadm.go:394] duration metric: took 4m59.395903283s to StartCluster
	I0818 20:13:36.384199   74485 settings.go:142] acquiring lock: {Name:mk9339daeff9135257a996b1957e524e416eb717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.384286   74485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 20:13:36.385964   74485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/kubeconfig: {Name:mkcac9f9744a404d34d51deab0183af951210b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 20:13:36.386201   74485 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.111 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0818 20:13:36.386320   74485 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 20:13:36.386400   74485 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386423   74485 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386440   74485 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-852598"
	I0818 20:13:36.386458   74485 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386470   74485 addons.go:243] addon metrics-server should already be in state true
	I0818 20:13:36.386477   74485 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-852598"
	I0818 20:13:36.386514   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386434   74485 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.386567   74485 addons.go:243] addon storage-provisioner should already be in state true
	I0818 20:13:36.386612   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.386435   74485 config.go:182] Loaded profile config "default-k8s-diff-port-852598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 20:13:36.386858   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386887   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386915   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.386948   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.386982   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.387015   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.387748   74485 out.go:177] * Verifying Kubernetes components...
	I0818 20:13:36.389177   74485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 20:13:36.402895   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0818 20:13:36.402928   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45719
	I0818 20:13:36.403477   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.403479   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404087   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.404111   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404120   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.404519   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404525   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.404795   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.405161   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.405192   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.405739   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45189
	I0818 20:13:36.406246   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.406753   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.406779   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.407167   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.407726   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.407771   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.408687   74485 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-852598"
	W0818 20:13:36.408710   74485 addons.go:243] addon default-storageclass should already be in state true
	I0818 20:13:36.408736   74485 host.go:66] Checking if "default-k8s-diff-port-852598" exists ...
	I0818 20:13:36.409073   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.409120   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.423471   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I0818 20:13:36.423953   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.424569   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.424588   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.424652   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0818 20:13:36.424966   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.425039   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.425257   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.425447   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.425462   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.425911   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.426098   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.427104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.427772   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.428108   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40401
	I0818 20:13:36.428438   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.428794   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.428816   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.429092   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.429645   74485 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/19423-7747/.minikube/bin/docker-machine-driver-kvm2
	I0818 20:13:36.429696   74485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 20:13:36.429708   74485 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 20:13:36.429758   74485 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 20:13:36.431859   74485 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.431879   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 20:13:36.431898   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.431958   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 20:13:36.431969   74485 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 20:13:36.431983   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.435295   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435730   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.435757   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435786   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.435978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436192   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.436238   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.436254   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.436312   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.436528   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.436570   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.436890   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.437171   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.437355   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.447762   74485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36981
	I0818 20:13:36.448303   74485 main.go:141] libmachine: () Calling .GetVersion
	I0818 20:13:36.448694   74485 main.go:141] libmachine: Using API Version  1
	I0818 20:13:36.448713   74485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 20:13:36.449011   74485 main.go:141] libmachine: () Calling .GetMachineName
	I0818 20:13:36.449160   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetState
	I0818 20:13:36.450722   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .DriverName
	I0818 20:13:36.450918   74485 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.450935   74485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 20:13:36.450954   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHHostname
	I0818 20:13:36.453529   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.453969   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:a7:8a", ip: ""} in network mk-default-k8s-diff-port-852598: {Iface:virbr4 ExpiryTime:2024-08-18 21:08:22 +0000 UTC Type:0 Mac:52:54:00:14:a7:8a Iaid: IPaddr:192.168.72.111 Prefix:24 Hostname:default-k8s-diff-port-852598 Clientid:01:52:54:00:14:a7:8a}
	I0818 20:13:36.453992   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | domain default-k8s-diff-port-852598 has defined IP address 192.168.72.111 and MAC address 52:54:00:14:a7:8a in network mk-default-k8s-diff-port-852598
	I0818 20:13:36.454163   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHPort
	I0818 20:13:36.454862   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHKeyPath
	I0818 20:13:36.455104   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .GetSSHUsername
	I0818 20:13:36.455246   74485 sshutil.go:53] new ssh client: &{IP:192.168.72.111 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/default-k8s-diff-port-852598/id_rsa Username:docker}
	I0818 20:13:36.606178   74485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 20:13:36.628852   74485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702927   74485 node_ready.go:49] node "default-k8s-diff-port-852598" has status "Ready":"True"
	I0818 20:13:36.702956   74485 node_ready.go:38] duration metric: took 74.077289ms for node "default-k8s-diff-port-852598" to be "Ready" ...
	I0818 20:13:36.702968   74485 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:36.713446   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:36.726670   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 20:13:36.726689   74485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 20:13:36.741673   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 20:13:36.784451   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 20:13:36.790772   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 20:13:36.790798   74485 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 20:13:36.845289   74485 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:36.845315   74485 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 20:13:36.914259   74485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 20:13:37.542511   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542538   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542559   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542543   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542874   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542914   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542922   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542932   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542935   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.542941   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.542953   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.542963   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.542971   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.542978   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.543114   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.543123   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545016   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.545041   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.545059   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572618   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.572643   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.572953   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.572976   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.572989   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.793891   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.793918   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794436   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) DBG | Closing plugin on server side
	I0818 20:13:37.794453   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794467   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794479   74485 main.go:141] libmachine: Making call to close driver server
	I0818 20:13:37.794487   74485 main.go:141] libmachine: (default-k8s-diff-port-852598) Calling .Close
	I0818 20:13:37.794747   74485 main.go:141] libmachine: Successfully made call to close driver server
	I0818 20:13:37.794762   74485 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 20:13:37.794774   74485 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-852598"
	I0818 20:13:37.796423   74485 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0818 20:13:36.814874   73711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:36.838208   73711 api_server.go:72] duration metric: took 4m18.723396382s to wait for apiserver process to appear ...
	I0818 20:13:36.838234   73711 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:36.838276   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:36.838334   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:36.890010   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:36.890036   73711 cri.go:89] found id: ""
	I0818 20:13:36.890046   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:36.890108   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.895675   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:36.895753   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:36.953110   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:36.953162   73711 cri.go:89] found id: ""
	I0818 20:13:36.953172   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:36.953230   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:36.959359   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:36.959456   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:37.011217   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.011248   73711 cri.go:89] found id: ""
	I0818 20:13:37.011258   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:37.011333   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.016895   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:37.016988   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:37.067705   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:37.067728   73711 cri.go:89] found id: ""
	I0818 20:13:37.067737   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:37.067794   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.073259   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:37.073332   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:37.112192   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.112216   73711 cri.go:89] found id: ""
	I0818 20:13:37.112226   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:37.112285   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.116988   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:37.117060   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:37.153720   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.153744   73711 cri.go:89] found id: ""
	I0818 20:13:37.153753   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:37.153811   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.158160   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:37.158226   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:37.197088   73711 cri.go:89] found id: ""
	I0818 20:13:37.197120   73711 logs.go:276] 0 containers: []
	W0818 20:13:37.197143   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:37.197151   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:37.197215   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:37.241214   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:37.241242   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:37.241248   73711 cri.go:89] found id: ""
	I0818 20:13:37.241257   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:37.241317   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.246159   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:37.250431   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:37.250460   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:37.313787   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:37.313817   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:37.333235   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:37.333263   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:37.461197   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:37.461236   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:37.505314   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:37.505343   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:37.576096   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:37.576121   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:38.083667   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:38.083702   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:38.128922   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:38.128947   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:38.170807   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:38.170842   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:38.265750   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:38.265784   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:38.323224   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:38.323269   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:38.372486   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:38.372530   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:38.413945   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:38.413986   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:37.798152   74485 addons.go:510] duration metric: took 1.411833485s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0818 20:13:38.719805   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.720446   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:40.720472   74485 pod_ready.go:82] duration metric: took 4.00699808s for pod "coredns-6f6b679f8f-fmjdr" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:40.720482   74485 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:42.728159   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:40.955186   73711 api_server.go:253] Checking apiserver healthz at https://192.168.61.228:8443/healthz ...
	I0818 20:13:40.960201   73711 api_server.go:279] https://192.168.61.228:8443/healthz returned 200:
	ok
	I0818 20:13:40.961240   73711 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:40.961260   73711 api_server.go:131] duration metric: took 4.123017717s to wait for apiserver health ...
	I0818 20:13:40.961273   73711 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:40.961298   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:13:40.961350   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:13:41.012093   73711 cri.go:89] found id: "568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.012113   73711 cri.go:89] found id: ""
	I0818 20:13:41.012121   73711 logs.go:276] 1 containers: [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0]
	I0818 20:13:41.012172   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.016282   73711 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:13:41.016337   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:13:41.063834   73711 cri.go:89] found id: "7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.063861   73711 cri.go:89] found id: ""
	I0818 20:13:41.063871   73711 logs.go:276] 1 containers: [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600]
	I0818 20:13:41.063930   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.068645   73711 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:13:41.068724   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:13:41.117544   73711 cri.go:89] found id: "c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:41.117565   73711 cri.go:89] found id: ""
	I0818 20:13:41.117573   73711 logs.go:276] 1 containers: [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb]
	I0818 20:13:41.117626   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.121916   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:13:41.121985   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:13:41.161641   73711 cri.go:89] found id: "38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:41.161660   73711 cri.go:89] found id: ""
	I0818 20:13:41.161667   73711 logs.go:276] 1 containers: [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741]
	I0818 20:13:41.161720   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.165727   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:13:41.165778   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:13:41.207519   73711 cri.go:89] found id: "6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:41.207544   73711 cri.go:89] found id: ""
	I0818 20:13:41.207554   73711 logs.go:276] 1 containers: [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4]
	I0818 20:13:41.207615   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.212114   73711 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:13:41.212171   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:13:41.255480   73711 cri.go:89] found id: "fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:41.255501   73711 cri.go:89] found id: ""
	I0818 20:13:41.255508   73711 logs.go:276] 1 containers: [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df]
	I0818 20:13:41.255560   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.259585   73711 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:13:41.259635   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:13:41.312099   73711 cri.go:89] found id: ""
	I0818 20:13:41.312124   73711 logs.go:276] 0 containers: []
	W0818 20:13:41.312131   73711 logs.go:278] No container was found matching "kindnet"
	I0818 20:13:41.312137   73711 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0818 20:13:41.312201   73711 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 20:13:41.358622   73711 cri.go:89] found id: "3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:41.358647   73711 cri.go:89] found id: "ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.358653   73711 cri.go:89] found id: ""
	I0818 20:13:41.358662   73711 logs.go:276] 2 containers: [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57]
	I0818 20:13:41.358723   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.363210   73711 ssh_runner.go:195] Run: which crictl
	I0818 20:13:41.367271   73711 logs.go:123] Gathering logs for storage-provisioner [ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57] ...
	I0818 20:13:41.367294   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad65c84a94b18da1b1cb76a7538c3c646764101c152648795cac99c852efba57"
	I0818 20:13:41.406329   73711 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:13:41.406355   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:13:41.768140   73711 logs.go:123] Gathering logs for container status ...
	I0818 20:13:41.768175   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 20:13:41.811010   73711 logs.go:123] Gathering logs for kubelet ...
	I0818 20:13:41.811035   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:13:41.886206   73711 logs.go:123] Gathering logs for kube-apiserver [568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0] ...
	I0818 20:13:41.886240   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 568c722ae9e2ff690f3460f1e57420c9ab4d7b69d40501dc1c92d886ee755ec0"
	I0818 20:13:41.938249   73711 logs.go:123] Gathering logs for etcd [7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600] ...
	I0818 20:13:41.938284   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7260b47bfedc9c38c99a94b0634c16b812ffe2e8b04ab657ad2ef54e0956a600"
	I0818 20:13:41.977289   73711 logs.go:123] Gathering logs for coredns [c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb] ...
	I0818 20:13:41.977317   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0a76eb785f5c8f89597e9cd82e49fe0267a8a5c7b2c5b5ef31b93bd9c5ec0cb"
	I0818 20:13:42.018606   73711 logs.go:123] Gathering logs for storage-provisioner [3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132] ...
	I0818 20:13:42.018630   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bb0cae57195cc9f3c0b1169b75fbff2e18f1d95381a05e52ae4f6d9bbf00132"
	I0818 20:13:42.055557   73711 logs.go:123] Gathering logs for dmesg ...
	I0818 20:13:42.055581   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:13:42.070467   73711 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:13:42.070494   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 20:13:42.182068   73711 logs.go:123] Gathering logs for kube-scheduler [38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741] ...
	I0818 20:13:42.182100   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38c187ad4ff354f8b9e3e8ad31c2ff7421bfb50e9b336d90cb72200df56a9741"
	I0818 20:13:42.219346   73711 logs.go:123] Gathering logs for kube-proxy [6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4] ...
	I0818 20:13:42.219373   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d66c800d25d38de5a0a04c2cce96d900efaa1cd57e6f7e2f518239854f39fd4"
	I0818 20:13:42.262193   73711 logs.go:123] Gathering logs for kube-controller-manager [fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df] ...
	I0818 20:13:42.262221   73711 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb1a81f2aed91e4642ca0f13159c9376ae563d3fccffe4f22418773e11c259df"
	I0818 20:13:44.839152   73711 system_pods.go:59] 8 kube-system pods found
	I0818 20:13:44.839181   73711 system_pods.go:61] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.839186   73711 system_pods.go:61] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.839191   73711 system_pods.go:61] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.839194   73711 system_pods.go:61] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.839197   73711 system_pods.go:61] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.839200   73711 system_pods.go:61] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.839206   73711 system_pods.go:61] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.839212   73711 system_pods.go:61] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.839218   73711 system_pods.go:74] duration metric: took 3.877940537s to wait for pod list to return data ...
	I0818 20:13:44.839225   73711 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:44.841877   73711 default_sa.go:45] found service account: "default"
	I0818 20:13:44.841896   73711 default_sa.go:55] duration metric: took 2.662355ms for default service account to be created ...
	I0818 20:13:44.841904   73711 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:44.846214   73711 system_pods.go:86] 8 kube-system pods found
	I0818 20:13:44.846240   73711 system_pods.go:89] "coredns-6f6b679f8f-vqsgw" [0e4e228f-22e6-4b65-a49f-ea58560346a5] Running
	I0818 20:13:44.846247   73711 system_pods.go:89] "etcd-no-preload-944426" [239d26e0-1f64-4eb5-8531-154c8fc2e8fd] Running
	I0818 20:13:44.846252   73711 system_pods.go:89] "kube-apiserver-no-preload-944426" [b87abba5-7386-44c0-ad36-03bdce301002] Running
	I0818 20:13:44.846259   73711 system_pods.go:89] "kube-controller-manager-no-preload-944426" [a1ed765e-7636-4d83-bfad-df9637181c3b] Running
	I0818 20:13:44.846264   73711 system_pods.go:89] "kube-proxy-2l6g8" [ab70884b-4b6b-4ebc-ae54-0b3216dcae47] Running
	I0818 20:13:44.846269   73711 system_pods.go:89] "kube-scheduler-no-preload-944426" [f599b00e-fe4d-4b11-b3e7-31d9142b09b6] Running
	I0818 20:13:44.846279   73711 system_pods.go:89] "metrics-server-6867b74b74-mhhbp" [2541855e-1597-4465-b244-d0d790fe4f6b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:44.846286   73711 system_pods.go:89] "storage-provisioner" [b159448e-15bd-4eb0-bd7f-ddba779588fd] Running
	I0818 20:13:44.846296   73711 system_pods.go:126] duration metric: took 4.386348ms to wait for k8s-apps to be running ...
	I0818 20:13:44.846305   73711 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:44.846356   73711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:44.863225   73711 system_svc.go:56] duration metric: took 16.912117ms WaitForService to wait for kubelet
	I0818 20:13:44.863262   73711 kubeadm.go:582] duration metric: took 4m26.748456958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:44.863287   73711 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:44.866049   73711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:44.866069   73711 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:44.866082   73711 node_conditions.go:105] duration metric: took 2.789471ms to run NodePressure ...
	I0818 20:13:44.866095   73711 start.go:241] waiting for startup goroutines ...
	I0818 20:13:44.866103   73711 start.go:246] waiting for cluster config update ...
	I0818 20:13:44.866135   73711 start.go:255] writing updated cluster config ...
	I0818 20:13:44.866415   73711 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:44.914902   73711 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:44.916929   73711 out.go:177] * Done! kubectl is now configured to use "no-preload-944426" cluster and "default" namespace by default
	I0818 20:13:45.226521   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:47.226773   74485 pod_ready.go:103] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"False"
	I0818 20:13:48.227026   74485 pod_ready.go:93] pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.227050   74485 pod_ready.go:82] duration metric: took 7.506560684s for pod "coredns-6f6b679f8f-xp4z4" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.227061   74485 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231313   74485 pod_ready.go:93] pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.231336   74485 pod_ready.go:82] duration metric: took 4.268255ms for pod "etcd-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.231345   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235228   74485 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.235249   74485 pod_ready.go:82] duration metric: took 3.897729ms for pod "kube-apiserver-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.235259   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238872   74485 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.238889   74485 pod_ready.go:82] duration metric: took 3.623044ms for pod "kube-controller-manager-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.238897   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243264   74485 pod_ready.go:93] pod "kube-proxy-hmvsl" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.243282   74485 pod_ready.go:82] duration metric: took 4.378808ms for pod "kube-proxy-hmvsl" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.243292   74485 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625076   74485 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace has status "Ready":"True"
	I0818 20:13:48.625101   74485 pod_ready.go:82] duration metric: took 381.800619ms for pod "kube-scheduler-default-k8s-diff-port-852598" in "kube-system" namespace to be "Ready" ...
	I0818 20:13:48.625111   74485 pod_ready.go:39] duration metric: took 11.92213071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 20:13:48.625128   74485 api_server.go:52] waiting for apiserver process to appear ...
	I0818 20:13:48.625193   74485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 20:13:48.640038   74485 api_server.go:72] duration metric: took 12.253809178s to wait for apiserver process to appear ...
	I0818 20:13:48.640061   74485 api_server.go:88] waiting for apiserver healthz status ...
	I0818 20:13:48.640081   74485 api_server.go:253] Checking apiserver healthz at https://192.168.72.111:8444/healthz ...
	I0818 20:13:48.644433   74485 api_server.go:279] https://192.168.72.111:8444/healthz returned 200:
	ok
	I0818 20:13:48.645289   74485 api_server.go:141] control plane version: v1.31.0
	I0818 20:13:48.645306   74485 api_server.go:131] duration metric: took 5.239358ms to wait for apiserver health ...
	I0818 20:13:48.645313   74485 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 20:13:48.829655   74485 system_pods.go:59] 9 kube-system pods found
	I0818 20:13:48.829698   74485 system_pods.go:61] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:48.829706   74485 system_pods.go:61] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:48.829718   74485 system_pods.go:61] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:48.829726   74485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:48.829731   74485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:48.829737   74485 system_pods.go:61] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:48.829742   74485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:48.829754   74485 system_pods.go:61] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:48.829760   74485 system_pods.go:61] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:48.829770   74485 system_pods.go:74] duration metric: took 184.451133ms to wait for pod list to return data ...
	I0818 20:13:48.829783   74485 default_sa.go:34] waiting for default service account to be created ...
	I0818 20:13:49.023954   74485 default_sa.go:45] found service account: "default"
	I0818 20:13:49.023982   74485 default_sa.go:55] duration metric: took 194.191689ms for default service account to be created ...
	I0818 20:13:49.023992   74485 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 20:13:49.227864   74485 system_pods.go:86] 9 kube-system pods found
	I0818 20:13:49.227892   74485 system_pods.go:89] "coredns-6f6b679f8f-fmjdr" [b26f1a75-d466-4634-b9da-9505ca282e30] Running
	I0818 20:13:49.227898   74485 system_pods.go:89] "coredns-6f6b679f8f-xp4z4" [6c416478-c540-4b55-9faa-95927e58d9a0] Running
	I0818 20:13:49.227902   74485 system_pods.go:89] "etcd-default-k8s-diff-port-852598" [dae1984d-c95e-4cff-9e32-8d36260c55bd] Running
	I0818 20:13:49.227907   74485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-852598" [bbf9d831-64ed-438c-9c16-b0edf6c584bd] Running
	I0818 20:13:49.227911   74485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-852598" [a3395940-779a-41fc-b9d0-db70d9bf86b5] Running
	I0818 20:13:49.227915   74485 system_pods.go:89] "kube-proxy-hmvsl" [7a577a1d-1e69-4bc2-ba50-c4922fcf58ae] Running
	I0818 20:13:49.227918   74485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-852598" [475950d1-bfd6-4372-9950-a295526e0c62] Running
	I0818 20:13:49.227925   74485 system_pods.go:89] "metrics-server-6867b74b74-gjnsb" [6565c023-a1ba-422e-9e9a-b601dd0419d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 20:13:49.227930   74485 system_pods.go:89] "storage-provisioner" [82be7417-303b-4572-b9c9-1bbd594ed3fe] Running
	I0818 20:13:49.227936   74485 system_pods.go:126] duration metric: took 203.939768ms to wait for k8s-apps to be running ...
	I0818 20:13:49.227945   74485 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 20:13:49.227989   74485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:13:49.242762   74485 system_svc.go:56] duration metric: took 14.808746ms WaitForService to wait for kubelet
	I0818 20:13:49.242793   74485 kubeadm.go:582] duration metric: took 12.856565711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 20:13:49.242819   74485 node_conditions.go:102] verifying NodePressure condition ...
	I0818 20:13:49.425517   74485 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 20:13:49.425543   74485 node_conditions.go:123] node cpu capacity is 2
	I0818 20:13:49.425555   74485 node_conditions.go:105] duration metric: took 182.731125ms to run NodePressure ...
	I0818 20:13:49.425569   74485 start.go:241] waiting for startup goroutines ...
	I0818 20:13:49.425577   74485 start.go:246] waiting for cluster config update ...
	I0818 20:13:49.425588   74485 start.go:255] writing updated cluster config ...
	I0818 20:13:49.425898   74485 ssh_runner.go:195] Run: rm -f paused
	I0818 20:13:49.473176   74485 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 20:13:49.475285   74485 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-852598" cluster and "default" namespace by default
	I0818 20:14:20.304277   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:14:20.304479   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:14:20.306292   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.306452   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:20.306706   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:20.306954   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:20.307330   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:20.307447   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:20.309068   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:20.309159   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:20.309264   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:20.309377   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:20.309433   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:20.309495   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:20.309581   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:20.309673   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:20.309764   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:20.309872   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:20.310001   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:20.310066   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:20.310127   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:20.310177   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:20.310225   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:20.310280   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:20.310330   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:20.310414   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:20.310496   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:20.310537   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:20.310593   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:20.312340   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:20.312457   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:20.312561   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:20.312653   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:20.312746   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:20.312887   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:14:20.312931   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:14:20.313001   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313204   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313267   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313444   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313544   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313750   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.313812   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.313968   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314026   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:14:20.314208   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:14:20.314220   74389 kubeadm.go:310] 
	I0818 20:14:20.314274   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:14:20.314324   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:14:20.314332   74389 kubeadm.go:310] 
	I0818 20:14:20.314366   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:14:20.314400   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:14:20.314494   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:14:20.314501   74389 kubeadm.go:310] 
	I0818 20:14:20.314585   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:14:20.314617   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:14:20.314645   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:14:20.314651   74389 kubeadm.go:310] 
	I0818 20:14:20.314734   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:14:20.314805   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:14:20.314815   74389 kubeadm.go:310] 
	I0818 20:14:20.314910   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:14:20.314983   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:14:20.315050   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:14:20.315118   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:14:20.315139   74389 kubeadm.go:310] 
	W0818 20:14:20.315224   74389 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0818 20:14:20.315257   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0818 20:14:20.802011   74389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 20:14:20.817696   74389 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 20:14:20.828317   74389 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 20:14:20.828343   74389 kubeadm.go:157] found existing configuration files:
	
	I0818 20:14:20.828389   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 20:14:20.837779   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 20:14:20.837828   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 20:14:20.847287   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 20:14:20.856244   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 20:14:20.856297   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 20:14:20.865962   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.875591   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 20:14:20.875636   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 20:14:20.885108   74389 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 20:14:20.895401   74389 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 20:14:20.895448   74389 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 20:14:20.905313   74389 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0818 20:14:20.980568   74389 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0818 20:14:20.980634   74389 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 20:14:21.141985   74389 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 20:14:21.142125   74389 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 20:14:21.142214   74389 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0818 20:14:21.319304   74389 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 20:14:21.321018   74389 out.go:235]   - Generating certificates and keys ...
	I0818 20:14:21.321103   74389 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 20:14:21.321167   74389 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 20:14:21.321273   74389 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0818 20:14:21.321324   74389 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0818 20:14:21.321412   74389 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0818 20:14:21.321518   74389 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0818 20:14:21.322294   74389 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0818 20:14:21.323367   74389 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0818 20:14:21.324408   74389 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0818 20:14:21.325380   74389 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0818 20:14:21.325588   74389 kubeadm.go:310] [certs] Using the existing "sa" key
	I0818 20:14:21.325680   74389 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 20:14:21.488448   74389 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 20:14:21.932438   74389 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 20:14:22.057714   74389 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 20:14:22.225927   74389 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 20:14:22.247513   74389 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 20:14:22.248599   74389 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 20:14:22.248689   74389 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 20:14:22.401404   74389 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 20:14:22.403079   74389 out.go:235]   - Booting up control plane ...
	I0818 20:14:22.403225   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 20:14:22.410231   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 20:14:22.411546   74389 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 20:14:22.412596   74389 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 20:14:22.417412   74389 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0818 20:15:02.419506   74389 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0818 20:15:02.419690   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:02.419892   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:07.420517   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:07.420725   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:17.421285   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:17.421489   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:15:37.421720   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:15:37.421929   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421247   74389 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0818 20:16:17.421466   74389 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0818 20:16:17.421493   74389 kubeadm.go:310] 
	I0818 20:16:17.421544   74389 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0818 20:16:17.421603   74389 kubeadm.go:310] 		timed out waiting for the condition
	I0818 20:16:17.421614   74389 kubeadm.go:310] 
	I0818 20:16:17.421713   74389 kubeadm.go:310] 	This error is likely caused by:
	I0818 20:16:17.421783   74389 kubeadm.go:310] 		- The kubelet is not running
	I0818 20:16:17.421940   74389 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0818 20:16:17.421954   74389 kubeadm.go:310] 
	I0818 20:16:17.422102   74389 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0818 20:16:17.422151   74389 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0818 20:16:17.422209   74389 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0818 20:16:17.422226   74389 kubeadm.go:310] 
	I0818 20:16:17.422322   74389 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0818 20:16:17.422430   74389 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0818 20:16:17.422440   74389 kubeadm.go:310] 
	I0818 20:16:17.422582   74389 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0818 20:16:17.422717   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0818 20:16:17.422825   74389 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0818 20:16:17.422929   74389 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0818 20:16:17.422940   74389 kubeadm.go:310] 
	I0818 20:16:17.423354   74389 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 20:16:17.423494   74389 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0818 20:16:17.423603   74389 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0818 20:16:17.423681   74389 kubeadm.go:394] duration metric: took 7m58.537542772s to StartCluster
	I0818 20:16:17.423729   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0818 20:16:17.423784   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 20:16:17.469886   74389 cri.go:89] found id: ""
	I0818 20:16:17.469914   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.469922   74389 logs.go:278] No container was found matching "kube-apiserver"
	I0818 20:16:17.469928   74389 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0818 20:16:17.469981   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 20:16:17.507038   74389 cri.go:89] found id: ""
	I0818 20:16:17.507066   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.507074   74389 logs.go:278] No container was found matching "etcd"
	I0818 20:16:17.507079   74389 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0818 20:16:17.507139   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 20:16:17.540610   74389 cri.go:89] found id: ""
	I0818 20:16:17.540642   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.540652   74389 logs.go:278] No container was found matching "coredns"
	I0818 20:16:17.540659   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0818 20:16:17.540716   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 20:16:17.575992   74389 cri.go:89] found id: ""
	I0818 20:16:17.576017   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.576027   74389 logs.go:278] No container was found matching "kube-scheduler"
	I0818 20:16:17.576035   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0818 20:16:17.576101   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 20:16:17.613137   74389 cri.go:89] found id: ""
	I0818 20:16:17.613169   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.613180   74389 logs.go:278] No container was found matching "kube-proxy"
	I0818 20:16:17.613187   74389 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 20:16:17.613246   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 20:16:17.649272   74389 cri.go:89] found id: ""
	I0818 20:16:17.649294   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.649302   74389 logs.go:278] No container was found matching "kube-controller-manager"
	I0818 20:16:17.649307   74389 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0818 20:16:17.649366   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 20:16:17.684358   74389 cri.go:89] found id: ""
	I0818 20:16:17.684382   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.684390   74389 logs.go:278] No container was found matching "kindnet"
	I0818 20:16:17.684395   74389 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 20:16:17.684444   74389 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 20:16:17.719075   74389 cri.go:89] found id: ""
	I0818 20:16:17.719098   74389 logs.go:276] 0 containers: []
	W0818 20:16:17.719109   74389 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0818 20:16:17.719121   74389 logs.go:123] Gathering logs for kubelet ...
	I0818 20:16:17.719135   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0818 20:16:17.781919   74389 logs.go:123] Gathering logs for dmesg ...
	I0818 20:16:17.781949   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 20:16:17.798574   74389 logs.go:123] Gathering logs for describe nodes ...
	I0818 20:16:17.798614   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0818 20:16:17.880159   74389 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0818 20:16:17.880184   74389 logs.go:123] Gathering logs for CRI-O ...
	I0818 20:16:17.880209   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0818 20:16:17.993015   74389 logs.go:123] Gathering logs for container status ...
	I0818 20:16:17.993052   74389 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0818 20:16:18.078876   74389 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0818 20:16:18.078928   74389 out.go:270] * 
	W0818 20:16:18.079007   74389 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.079025   74389 out.go:270] * 
	W0818 20:16:18.079989   74389 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 20:16:18.083231   74389 out.go:201] 
	W0818 20:16:18.084528   74389 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0818 20:16:18.084571   74389 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0818 20:16:18.084598   74389 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0818 20:16:18.086023   74389 out.go:201] 
	
	
	==> CRI-O <==
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.347646551Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012879347611583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12a32fa1-334d-487a-bdd5-b861471418fd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.348106118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa65a9b7-d625-4e67-9fd6-632feb3ccfb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.348160024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa65a9b7-d625-4e67-9fd6-632feb3ccfb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.348196190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aa65a9b7-d625-4e67-9fd6-632feb3ccfb2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.383220234Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31505d6b-5f11-4c35-b10c-cd9e36d61ba6 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.383316206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31505d6b-5f11-4c35-b10c-cd9e36d61ba6 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.384430269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2c83636f-8ecb-4ef2-a966-608c33e9f603 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.384921317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012879384836112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2c83636f-8ecb-4ef2-a966-608c33e9f603 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.385359778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c85a90ca-d6a9-43c3-8d35-f34fa2ebdfa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.385410782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c85a90ca-d6a9-43c3-8d35-f34fa2ebdfa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.385442743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c85a90ca-d6a9-43c3-8d35-f34fa2ebdfa2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.419292539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47a42400-8f8b-454b-9ba2-20398be6251c name=/runtime.v1.RuntimeService/Version
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.419388670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47a42400-8f8b-454b-9ba2-20398be6251c name=/runtime.v1.RuntimeService/Version
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.420737624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0931b28e-5eaa-482f-bbd8-6a3bd4e290f5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.421196443Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012879421148850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0931b28e-5eaa-482f-bbd8-6a3bd4e290f5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.421732728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fcc3ec3-73d1-43d3-b9ca-17eb7117686a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.421778045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fcc3ec3-73d1-43d3-b9ca-17eb7117686a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.421813790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2fcc3ec3-73d1-43d3-b9ca-17eb7117686a name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.459683108Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3afeb3b6-4416-418f-9189-c815e3d82106 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.459792334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3afeb3b6-4416-418f-9189-c815e3d82106 name=/runtime.v1.RuntimeService/Version
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.461014526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c9c22d0-a148-48d1-a819-c3606b2540f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.461624455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724012879461597219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c9c22d0-a148-48d1-a819-c3606b2540f6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.462323346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fedd9ef-d814-4eaa-8977-ad4dc81bd57c name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.462418837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fedd9ef-d814-4eaa-8977-ad4dc81bd57c name=/runtime.v1.RuntimeService/ListContainers
	Aug 18 20:27:59 old-k8s-version-247539 crio[653]: time="2024-08-18 20:27:59.462512836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3fedd9ef-d814-4eaa-8977-ad4dc81bd57c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug18 20:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051405] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041581] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.935576] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug18 20:08] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637295] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.911494] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.071095] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080090] systemd-fstab-generator[583]: Ignoring "noauto" option for root device
	[  +0.174365] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.151707] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.249665] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +6.351764] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +0.067129] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.161515] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[ +12.130980] kauditd_printk_skb: 46 callbacks suppressed
	[Aug18 20:12] systemd-fstab-generator[5096]: Ignoring "noauto" option for root device
	[Aug18 20:14] systemd-fstab-generator[5379]: Ignoring "noauto" option for root device
	[  +0.062456] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:27:59 up 20 min,  0 users,  load average: 0.32, 0.11, 0.05
	Linux old-k8s-version-247539 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: created by k8s.io/kubernetes/pkg/kubelet/config.newSourceApiserverFromLW
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47 +0x1e5
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: goroutine 128 [runnable]:
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run(0xc0008dbb80, 0xc0000a60c0)
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:368
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/informers.(*sharedInformerFactory).Start
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:134 +0x191
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: goroutine 129 [runnable]:
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001dec40, 0xc0000a60c0)
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: goroutine 146 [syscall]:
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: syscall.Syscall6(0xe8, 0xc, 0xc000a8fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc000a8fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc00039e540, 0x0, 0x0, 0x0)
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0000502d0)
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Aug 18 20:27:59 old-k8s-version-247539 kubelet[6916]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Aug 18 20:27:59 old-k8s-version-247539 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 18 20:27:59 old-k8s-version-247539 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 2 (225.523513ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-247539" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (155.98s)
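
The failure above is kubeadm's wait-control-plane timeout: the kubelet never answers on localhost:10248, the "container status" table is empty, and the apiserver is reported as Stopped. Minikube's own output (the W0818 20:16:18 lines) points at the kubelet cgroup driver and suggests retrying with --extra-config=kubelet.cgroup-driver=systemd. The sketch below is a hypothetical way to script that retry the same way the test helpers shell out to the minikube binary; the binary path, profile name, and flags are copied from this report and are not a verified fix.

	package main

	import (
		"os"
		"os/exec"
	)

	// Re-run the failing profile with the kubelet cgroup-driver override that
	// the minikube output above suggests. Adjust the binary path and profile
	// name for your own environment.
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "old-k8s-version-247539",
			"--kubernetes-version=v1.20.0",
			"--container-runtime=crio",
			"--driver=kvm2",
			"--extra-config=kubelet.cgroup-driver=systemd",
		)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
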

                                                
                                    

Test pass (242/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 53.39
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 19.07
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 88.17
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 140.04
31 TestAddons/serial/GCPAuth/Namespaces 0.14
33 TestAddons/parallel/Registry 18.74
35 TestAddons/parallel/InspektorGadget 10.76
37 TestAddons/parallel/HelmTiller 12.27
39 TestAddons/parallel/CSI 69.33
40 TestAddons/parallel/Headlamp 19.1
41 TestAddons/parallel/CloudSpanner 5.98
42 TestAddons/parallel/LocalPath 55.66
43 TestAddons/parallel/NvidiaDevicePlugin 6.73
44 TestAddons/parallel/Yakd 12.31
46 TestCertOptions 49.71
47 TestCertExpiration 276.91
49 TestForceSystemdFlag 82.68
50 TestForceSystemdEnv 72.15
52 TestKVMDriverInstallOrUpdate 8.59
56 TestErrorSpam/setup 44.54
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.54
60 TestErrorSpam/unpause 1.82
61 TestErrorSpam/stop 5.51
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 86.94
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.94
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.24
73 TestFunctional/serial/CacheCmd/cache/add_local 2.25
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 29.58
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.39
84 TestFunctional/serial/LogsFileCmd 1.41
85 TestFunctional/serial/InvalidService 4.1
87 TestFunctional/parallel/ConfigCmd 0.33
88 TestFunctional/parallel/DashboardCmd 30.31
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.19
91 TestFunctional/parallel/StatusCmd 1.5
95 TestFunctional/parallel/ServiceCmdConnect 11.59
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 48.9
99 TestFunctional/parallel/SSHCmd 0.41
100 TestFunctional/parallel/CpCmd 1.3
101 TestFunctional/parallel/MySQL 24.64
102 TestFunctional/parallel/FileSync 0.19
103 TestFunctional/parallel/CertSync 1.49
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
111 TestFunctional/parallel/License 0.63
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
124 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
126 TestFunctional/parallel/ProfileCmd/profile_list 0.32
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
128 TestFunctional/parallel/MountCmd/any-port 8.69
129 TestFunctional/parallel/ServiceCmd/List 0.33
130 TestFunctional/parallel/MountCmd/specific-port 2.07
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
133 TestFunctional/parallel/ServiceCmd/Format 0.41
134 TestFunctional/parallel/ServiceCmd/URL 0.69
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.34
140 TestFunctional/parallel/ImageCommands/ImageBuild 6.78
141 TestFunctional/parallel/ImageCommands/Setup 1.93
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.91
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.95
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.15
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.86
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
149 TestFunctional/parallel/Version/short 0.05
150 TestFunctional/parallel/Version/components 0.93
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 246.61
158 TestMultiControlPlane/serial/DeployApp 6.43
159 TestMultiControlPlane/serial/PingHostFromPods 1.18
160 TestMultiControlPlane/serial/AddWorkerNode 57.78
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.46
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.83
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/RestartCluster 328.73
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 78.29
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 55.22
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.68
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.36
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.18
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 86.02
211 TestMountStart/serial/StartWithMountFirst 27.17
212 TestMountStart/serial/VerifyMountFirst 0.35
213 TestMountStart/serial/StartWithMountSecond 27.63
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.66
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.29
221 TestMultiNode/serial/FreshStart2Nodes 115.4
222 TestMultiNode/serial/DeployApp2Nodes 5.26
223 TestMultiNode/serial/PingHostFrom2Pods 0.78
224 TestMultiNode/serial/AddNode 48.76
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.22
227 TestMultiNode/serial/CopyFile 7.03
228 TestMultiNode/serial/StopNode 2.28
229 TestMultiNode/serial/StartAfterStop 39.76
231 TestMultiNode/serial/DeleteNode 2.36
233 TestMultiNode/serial/RestartMultiNode 188.29
234 TestMultiNode/serial/ValidateNameConflict 43.75
241 TestScheduledStopUnix 113.42
245 TestRunningBinaryUpgrade 235.95
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
251 TestNoKubernetes/serial/StartWithK8s 95.29
259 TestNetworkPlugins/group/false 2.98
263 TestNoKubernetes/serial/StartWithStopK8s 67.71
264 TestNoKubernetes/serial/Start 27.62
266 TestPause/serial/Start 90.61
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
268 TestNoKubernetes/serial/ProfileList 24.91
269 TestNoKubernetes/serial/Stop 2.16
270 TestNoKubernetes/serial/StartNoArgs 22.07
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
280 TestStoppedBinaryUpgrade/Setup 2.62
281 TestStoppedBinaryUpgrade/Upgrade 101.83
282 TestNetworkPlugins/group/auto/Start 99.09
283 TestNetworkPlugins/group/kindnet/Start 92.7
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
285 TestNetworkPlugins/group/calico/Start 106.1
286 TestNetworkPlugins/group/auto/KubeletFlags 0.2
287 TestNetworkPlugins/group/auto/NetCatPod 11.21
288 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
289 TestNetworkPlugins/group/auto/DNS 0.17
290 TestNetworkPlugins/group/auto/Localhost 0.17
291 TestNetworkPlugins/group/auto/HairPin 0.14
292 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
293 TestNetworkPlugins/group/kindnet/NetCatPod 11.23
294 TestNetworkPlugins/group/kindnet/DNS 0.19
295 TestNetworkPlugins/group/kindnet/Localhost 0.15
296 TestNetworkPlugins/group/kindnet/HairPin 0.15
297 TestNetworkPlugins/group/custom-flannel/Start 71.24
298 TestNetworkPlugins/group/enable-default-cni/Start 101.6
299 TestNetworkPlugins/group/calico/ControllerPod 6.01
300 TestNetworkPlugins/group/calico/KubeletFlags 0.2
301 TestNetworkPlugins/group/calico/NetCatPod 11.24
302 TestNetworkPlugins/group/calico/DNS 0.17
303 TestNetworkPlugins/group/calico/Localhost 0.15
304 TestNetworkPlugins/group/calico/HairPin 0.14
305 TestNetworkPlugins/group/flannel/Start 84.07
306 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
307 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
308 TestNetworkPlugins/group/custom-flannel/DNS 0.24
309 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
310 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
311 TestNetworkPlugins/group/bridge/Start 60.56
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
314 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
315 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
316 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
317 TestNetworkPlugins/group/flannel/ControllerPod 6.01
320 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
321 TestNetworkPlugins/group/flannel/NetCatPod 12.34
322 TestNetworkPlugins/group/flannel/DNS 0.23
323 TestNetworkPlugins/group/flannel/Localhost 0.2
324 TestNetworkPlugins/group/flannel/HairPin 0.18
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
326 TestNetworkPlugins/group/bridge/NetCatPod 11.23
328 TestStartStop/group/no-preload/serial/FirstStart 109.72
329 TestNetworkPlugins/group/bridge/DNS 0.16
330 TestNetworkPlugins/group/bridge/Localhost 0.14
331 TestNetworkPlugins/group/bridge/HairPin 0.13
333 TestStartStop/group/embed-certs/serial/FirstStart 112.17
335 TestStartStop/group/newest-cni/serial/FirstStart 86.07
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
338 TestStartStop/group/newest-cni/serial/Stop 11.34
339 TestStartStop/group/no-preload/serial/DeployApp 10.28
340 TestStartStop/group/embed-certs/serial/DeployApp 10.29
341 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/newest-cni/serial/SecondStart 38.43
345 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
350 TestStartStop/group/newest-cni/serial/Pause 4.36
352 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.44
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
360 TestStartStop/group/no-preload/serial/SecondStart 649.95
361 TestStartStop/group/embed-certs/serial/SecondStart 603.08
363 TestStartStop/group/old-k8s-version/serial/Stop 4.29
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
366 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 547
x
+
TestDownloadOnly/v1.20.0/json-events (53.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-664260 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-664260 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (53.39187471s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (53.39s)
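
The json-events variant exercises the download-only start through -o=json, which streams machine-readable progress events instead of the human-readable output. Below is a minimal, hypothetical consumer for that stream, written under the assumption that each event arrives as a single JSON object per line and carries fields such as "type" and "data"; neither assumption is taken from this report. Pipe the stdout of the start invocation recorded above into it.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Read line-delimited JSON events from stdin and print a short summary of
	// each one. Lines that do not parse as JSON are skipped.
	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // individual events can be long
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue
			}
			fmt.Printf("%v: %v\n", ev["type"], ev["data"])
		}
	}
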

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-664260
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-664260: exit status 85 (55.171638ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-664260 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |          |
	|         | -p download-only-664260        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:38:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:38:09.678169   14946 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:38:09.678288   14946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:09.678298   14946 out.go:358] Setting ErrFile to fd 2...
	I0818 18:38:09.678302   14946 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:09.678475   14946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	W0818 18:38:09.678614   14946 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19423-7747/.minikube/config/config.json: open /home/jenkins/minikube-integration/19423-7747/.minikube/config/config.json: no such file or directory
	I0818 18:38:09.679200   14946 out.go:352] Setting JSON to true
	I0818 18:38:09.680106   14946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1234,"bootTime":1724005056,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:38:09.680157   14946 start.go:139] virtualization: kvm guest
	I0818 18:38:09.682655   14946 out.go:97] [download-only-664260] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0818 18:38:09.682750   14946 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball: no such file or directory
	I0818 18:38:09.682796   14946 notify.go:220] Checking for updates...
	I0818 18:38:09.684115   14946 out.go:169] MINIKUBE_LOCATION=19423
	I0818 18:38:09.685444   14946 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:38:09.686617   14946 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:38:09.687910   14946 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:38:09.689126   14946 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0818 18:38:09.691634   14946 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 18:38:09.691852   14946 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:38:09.792814   14946 out.go:97] Using the kvm2 driver based on user configuration
	I0818 18:38:09.792839   14946 start.go:297] selected driver: kvm2
	I0818 18:38:09.792845   14946 start.go:901] validating driver "kvm2" against <nil>
	I0818 18:38:09.793145   14946 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:38:09.793261   14946 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 18:38:09.807520   14946 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 18:38:09.807587   14946 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:38:09.808072   14946 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0818 18:38:09.808236   14946 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 18:38:09.808313   14946 cni.go:84] Creating CNI manager for ""
	I0818 18:38:09.808329   14946 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 18:38:09.808338   14946 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 18:38:09.808412   14946 start.go:340] cluster config:
	{Name:download-only-664260 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-664260 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:38:09.808602   14946 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:38:09.810335   14946 out.go:97] Downloading VM boot image ...
	I0818 18:38:09.810370   14946 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19423-7747/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:38:20.153962   14946 out.go:97] Starting "download-only-664260" primary control-plane node in "download-only-664260" cluster
	I0818 18:38:20.153987   14946 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 18:38:20.263066   14946 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0818 18:38:20.263099   14946 cache.go:56] Caching tarball of preloaded images
	I0818 18:38:20.263273   14946 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 18:38:20.265201   14946 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0818 18:38:20.265215   14946 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0818 18:38:20.378067   14946 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0818 18:38:33.608439   14946 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0818 18:38:33.608526   14946 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0818 18:38:34.506259   14946 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0818 18:38:34.506591   14946 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/download-only-664260/config.json ...
	I0818 18:38:34.506620   14946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/download-only-664260/config.json: {Name:mk7660b99689d51e2e32de0bc14ecea4611eef9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:34.506789   14946 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0818 18:38:34.507006   14946 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19423-7747/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-664260 host does not exist
	  To start a cluster, run: "minikube start -p download-only-664260"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
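
The log above records the preload workflow: resolve the remote tarball, download it with an md5 digest carried in the URL (?checksum=md5:...), then verify the saved file before caching it. As a minimal sketch of that verification step (not minikube's actual implementation), the snippet below recomputes the MD5 of a downloaded preload tarball and compares it with the expected digest; the file name and digest are the ones recorded in this report and stand in for whatever you downloaded.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// Verify a downloaded preload tarball against the md5 digest from its
	// download URL. Exits non-zero on I/O errors or a checksum mismatch.
	func main() {
		const want = "f93b07cde9c3289306cbaeb7a1803c19"
		path := "preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"

		f, err := os.Open(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			fmt.Fprintf(os.Stderr, "checksum mismatch: got %s want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}
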

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-664260
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (19.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-371992 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-371992 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (19.074046955s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (19.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-371992
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-371992: exit status 85 (56.948527ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-664260 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | -p download-only-664260        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:39 UTC |
	| delete  | -p download-only-664260        | download-only-664260 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC | 18 Aug 24 18:39 UTC |
	| start   | -o=json --download-only        | download-only-371992 | jenkins | v1.33.1 | 18 Aug 24 18:39 UTC |                     |
	|         | -p download-only-371992        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:39:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:39:03.401490   15300 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:39:03.401712   15300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:39:03.401719   15300 out.go:358] Setting ErrFile to fd 2...
	I0818 18:39:03.401723   15300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:39:03.401896   15300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 18:39:03.402422   15300 out.go:352] Setting JSON to true
	I0818 18:39:03.403269   15300 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1287,"bootTime":1724005056,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:39:03.403333   15300 start.go:139] virtualization: kvm guest
	I0818 18:39:03.405630   15300 out.go:97] [download-only-371992] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:39:03.405773   15300 notify.go:220] Checking for updates...
	I0818 18:39:03.407128   15300 out.go:169] MINIKUBE_LOCATION=19423
	I0818 18:39:03.408543   15300 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:39:03.410067   15300 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:39:03.411363   15300 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:39:03.412684   15300 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0818 18:39:03.415277   15300 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 18:39:03.415541   15300 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:39:03.448534   15300 out.go:97] Using the kvm2 driver based on user configuration
	I0818 18:39:03.448556   15300 start.go:297] selected driver: kvm2
	I0818 18:39:03.448567   15300 start.go:901] validating driver "kvm2" against <nil>
	I0818 18:39:03.448888   15300 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:39:03.448982   15300 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-7747/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 18:39:03.464493   15300 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 18:39:03.464561   15300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:39:03.465053   15300 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0818 18:39:03.465208   15300 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 18:39:03.465290   15300 cni.go:84] Creating CNI manager for ""
	I0818 18:39:03.465303   15300 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0818 18:39:03.465310   15300 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0818 18:39:03.465380   15300 start.go:340] cluster config:
	{Name:download-only-371992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-371992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:39:03.465488   15300 iso.go:125] acquiring lock: {Name:mk9201a26af135372f8a85ea726fe0c576f878b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:39:03.467348   15300 out.go:97] Starting "download-only-371992" primary control-plane node in "download-only-371992" cluster
	I0818 18:39:03.467361   15300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:39:03.580773   15300 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0818 18:39:03.580798   15300 cache.go:56] Caching tarball of preloaded images
	I0818 18:39:03.580947   15300 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0818 18:39:03.582932   15300 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0818 18:39:03.582953   15300 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0818 18:39:03.695370   15300 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19423-7747/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-371992 host does not exist
	  To start a cluster, run: "minikube start -p download-only-371992"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-371992
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)
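
For reference, the v1.31.0 download-only sequence exercised above can be reproduced by hand. This is a minimal sketch, assuming a minikube binary on PATH stands in for out/minikube-linux-amd64 and the same KVM2/CRI-O configuration is wanted:

  # download the Kubernetes binaries and the CRI-O preload without creating a VM
  minikube start -o=json --download-only -p download-only-371992 --force \
    --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2
  # exit status 85 is expected here: the profile has no running host, only cached artifacts
  minikube logs -p download-only-371992
  minikube delete --all
  minikube delete -p download-only-371992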

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-128446 --alsologtostderr --binary-mirror http://127.0.0.1:41387 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-128446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-128446
--- PASS: TestBinaryMirror (0.58s)
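
TestBinaryMirror points the download at a local HTTP server instead of the upstream release URLs. A minimal sketch of the invocation, assuming a mirror is already serving the binaries; 127.0.0.1:41387 is simply the port the test server happened to bind:

  # fetch kubectl/kubelet/kubeadm from the mirror rather than the default download location
  minikube start --download-only -p binary-mirror-128446 \
    --binary-mirror http://127.0.0.1:41387 --driver=kvm2 --container-runtime=crio
  minikube delete -p binary-mirror-128446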

                                                
                                    
x
+
TestOffline (88.17s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-277219 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-277219 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.956689822s)
helpers_test.go:175: Cleaning up "offline-crio-277219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-277219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-277219: (1.20848028s)
--- PASS: TestOffline (88.17s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-483094
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-483094: exit status 85 (48.662588ms)

                                                
                                                
-- stdout --
	* Profile "addons-483094" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-483094"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-483094
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-483094: exit status 85 (46.12201ms)

                                                
                                                
-- stdout --
	* Profile "addons-483094" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-483094"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (140.04s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-483094 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-483094 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m20.036192723s)
--- PASS: TestAddons/Setup (140.04s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-483094 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-483094 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.878289ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-dgwqw" [067b7646-ddf6-4f0b-bc5b-f1f0f7886c10] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003893574s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8h2l6" [6562d7a2-f7f9-476f-9b02-fd1cf7d752f3] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004079647s
addons_test.go:342: (dbg) Run:  kubectl --context addons-483094 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-483094 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-483094 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.538123742s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 ip
2024/08/18 18:42:25 [DEBUG] GET http://192.168.39.116:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 addons disable registry --alsologtostderr -v=1: (1.023890889s)
--- PASS: TestAddons/parallel/Registry (18.74s)
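
The registry check above probes the addon's in-cluster service from a throwaway pod and then queries the node from the host. A minimal sketch of the same probe, assuming the registry addon is enabled on a profile named addons-483094:

  # resolve and spider the registry ClusterIP service from inside the cluster
  kubectl --context addons-483094 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # the node IP reported here is what the test then queries on port 5000
  minikube -p addons-483094 ip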

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zx5jn" [a574d0c7-8cf3-4e5d-9055-07edfedf8fe0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005630402s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-483094
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-483094: (5.750708237s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.27s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.014446ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-84wz4" [14ad1b2b-905b-495b-a83a-4e89d1a1c04f] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004283532s
addons_test.go:475: (dbg) Run:  kubectl --context addons-483094 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-483094 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.669119624s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.27s)

                                                
                                    
x
+
TestAddons/parallel/CSI (69.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.537048ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5515158e-34ab-4873-bef0-e03211b03375] Pending
helpers_test.go:344: "task-pv-pod" [5515158e-34ab-4873-bef0-e03211b03375] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5515158e-34ab-4873-bef0-e03211b03375] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.00425126s
addons_test.go:590: (dbg) Run:  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-483094 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-483094 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-483094 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-483094 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e88e256f-fa3e-4a98-9c5e-a5adf444d077] Pending
helpers_test.go:344: "task-pv-pod-restore" [e88e256f-fa3e-4a98-9c5e-a5adf444d077] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e88e256f-fa3e-4a98-9c5e-a5adf444d077] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004089013s
addons_test.go:632: (dbg) Run:  kubectl --context addons-483094 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-483094 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-483094 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.69971878s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.33s)
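
The CSI run above walks a claim -> pod -> snapshot -> restored-claim -> pod cycle using the manifests under testdata/csi-hostpath-driver in the minikube test tree (not reproduced here). A condensed sketch of that flow against the same profile:

  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/pvc.yaml           # PVC "hpvc"
  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod "task-pv-pod" mounts it
  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/snapshot.yaml      # VolumeSnapshot "new-snapshot-demo"
  kubectl --context addons-483094 delete pod task-pv-pod
  kubectl --context addons-483094 delete pvc hpvc
  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # restored PVC "hpvc-restore"
  kubectl --context addons-483094 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml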

                                                
                                    
x
+
TestAddons/parallel/Headlamp (19.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-483094 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-483094 --alsologtostderr -v=1: (1.240412474s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-nmnpf" [b1c8ee4e-747b-4281-856a-813f180a1e6d] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-nmnpf" [b1c8ee4e-747b-4281-856a-813f180a1e6d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-nmnpf" [b1c8ee4e-747b-4281-856a-813f180a1e6d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005234563s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 addons disable headlamp --alsologtostderr -v=1: (5.855085118s)
--- PASS: TestAddons/parallel/Headlamp (19.10s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.98s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-szwrv" [c2a2f025-da4f-4021-83fe-c5aa8357fd22] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.014335881s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-483094
--- PASS: TestAddons/parallel/CloudSpanner (5.98s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.66s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-483094 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-483094 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-483094 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4b38e65f-391b-48c7-a8e1-41d53682f501] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4b38e65f-391b-48c7-a8e1-41d53682f501] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4b38e65f-391b-48c7-a8e1-41d53682f501] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003586685s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-483094 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 ssh "cat /opt/local-path-provisioner/pvc-512d0e6d-7527-4406-847a-81e42c2ab4b4_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-483094 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-483094 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.851233046s)
--- PASS: TestAddons/parallel/LocalPath (55.66s)
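
The LocalPath run exercises the storage-provisioner-rancher addon end to end: a PVC is bound by the local-path provisioner, a pod writes into it and completes, and the file is read back from the provisioner directory on the node. A condensed sketch; the pvc-... directory name under /opt/local-path-provisioner is generated per claim, so the placeholder below is illustrative:

  kubectl --context addons-483094 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-483094 apply -f testdata/storage-provisioner-rancher/pod.yaml
  # once the pod has completed, the written file is visible on the node
  minikube -p addons-483094 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"
  kubectl --context addons-483094 delete pod test-local-path
  kubectl --context addons-483094 delete pvc test-pvc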

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-tvfnx" [a01a3329-cdbd-44ec-b8a3-6bc065c8505a] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004752705s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-483094
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.31s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-n2dxp" [3d4281b0-9ac1-445e-992e-e163320882c0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004523543s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-483094 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-483094 addons disable yakd --alsologtostderr -v=1: (6.305668048s)
--- PASS: TestAddons/parallel/Yakd (12.31s)

                                                
                                    
x
+
TestCertOptions (49.71s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-272048 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-272048 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (48.275530948s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-272048 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-272048 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-272048 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-272048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-272048
--- PASS: TestCertOptions (49.71s)

                                                
                                    
x
+
TestCertExpiration (276.91s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-735899 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-735899 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (56.976286009s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-735899 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-735899 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (38.950091723s)
helpers_test.go:175: Cleaning up "cert-expiration-735899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-735899
--- PASS: TestCertExpiration (276.91s)
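
TestCertExpiration regenerates certificates on an existing profile by re-running start with a longer --cert-expiration. A minimal sketch, assuming the same profile name; the pause between the two starts is presumably there to let the 3-minute certificates lapse first:

  # create the cluster with certificates that expire after 3 minutes
  minikube start -p cert-expiration-735899 --memory=2048 --cert-expiration=3m \
    --driver=kvm2 --container-runtime=crio
  # after the window passes, starting again re-issues certificates valid for 8760h (one year)
  minikube start -p cert-expiration-735899 --memory=2048 --cert-expiration=8760h \
    --driver=kvm2 --container-runtime=crio
  minikube delete -p cert-expiration-735899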

                                                
                                    
x
+
TestForceSystemdFlag (82.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-433596 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0818 19:49:09.713973   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-433596 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.915145506s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-433596 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-433596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-433596
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-433596: (1.537419092s)
--- PASS: TestForceSystemdFlag (82.68s)
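
TestForceSystemdFlag starts a profile with --force-systemd and then reads back the CRI-O drop-in that minikube writes on the node. A minimal sketch of the same check; what the test asserts about the file's contents is not shown in the log above, only the cat:

  minikube start -p force-systemd-flag-433596 --memory=2048 --force-systemd \
    --driver=kvm2 --container-runtime=crio
  # the generated drop-in is what the test inspects
  minikube -p force-systemd-flag-433596 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
  minikube delete -p force-systemd-flag-433596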

                                                
                                    
x
+
TestForceSystemdEnv (72.15s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-293103 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-293103 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.185398353s)
helpers_test.go:175: Cleaning up "force-systemd-env-293103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-293103
--- PASS: TestForceSystemdEnv (72.15s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (8.59s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.59s)

                                                
                                    
x
+
TestErrorSpam/setup (44.54s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-304235 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-304235 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-304235 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-304235 --driver=kvm2  --container-runtime=crio: (44.543773297s)
--- PASS: TestErrorSpam/setup (44.54s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.82s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

                                                
                                    
x
+
TestErrorSpam/stop (5.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 stop: (2.274279423s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 stop: (1.924797924s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-304235 --log_dir /tmp/nospam-304235 stop: (1.305649112s)
--- PASS: TestErrorSpam/stop (5.51s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19423-7747/.minikube/files/etc/test/nested/copy/14934/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (86.94s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159278 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0818 18:51:44.019338   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:44.026150   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:44.037545   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:44.058935   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:44.100333   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:44.181871   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:44.343451   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:44.665214   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:45.307280   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:46.588936   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:49.150926   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:54.272317   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:52:04.513801   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:52:24.995796   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-159278 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m26.944091467s)
--- PASS: TestFunctional/serial/StartWithProxy (86.94s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (37.94s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159278 --alsologtostderr -v=8
E0818 18:53:05.957785   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-159278 --alsologtostderr -v=8: (37.935161127s)
functional_test.go:663: soft start took 37.935887332s for "functional-159278" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.94s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-159278 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 cache add registry.k8s.io/pause:3.1: (1.000862384s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 cache add registry.k8s.io/pause:3.3: (1.15796632s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 cache add registry.k8s.io/pause:latest: (1.079832744s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-159278 /tmp/TestFunctionalserialCacheCmdcacheadd_local3924335468/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cache add minikube-local-cache-test:functional-159278
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 cache add minikube-local-cache-test:functional-159278: (1.933073218s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cache delete minikube-local-cache-test:functional-159278
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-159278
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.840873ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
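
A minimal sketch of the reload sequence walked through above: delete the image inside the node, confirm it is gone, then restore it from the local cache:

  minikube -p functional-159278 ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional-159278 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
  minikube -p functional-159278 cache reload
  minikube -p functional-159278 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again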

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 kubectl -- --context functional-159278 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-159278 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (29.58s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159278 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-159278 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.582633534s)
functional_test.go:761: restart took 29.582741576s for "functional-159278" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.58s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-159278 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 logs: (1.390059521s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 logs --file /tmp/TestFunctionalserialLogsFileCmd1826282677/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 logs --file /tmp/TestFunctionalserialLogsFileCmd1826282677/001/logs.txt: (1.410488365s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.41s)

                                                
                                    
TestFunctional/serial/InvalidService (4.1s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-159278 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-159278
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-159278: exit status 115 (265.571527ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.189:30195 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-159278 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)
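
A sketch of the negative case exercised above: a Service whose selector matches no running pod makes minikube service fail with SVC_UNREACHABLE (exit status 115 here) instead of printing a usable URL:

  kubectl --context functional-159278 apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional-159278    # exits non-zero: no running pod backs the service
  kubectl --context functional-159278 delete -f testdata/invalidsvc.yaml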

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 config get cpus: exit status 14 (52.46813ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 config get cpus: exit status 14 (56.864501ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
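
A sketch of the config round trip checked here; config get on a key that is not set exits with status 14:

  minikube -p functional-159278 config unset cpus
  minikube -p functional-159278 config get cpus      # exit status 14: key not found in config
  minikube -p functional-159278 config set cpus 2
  minikube -p functional-159278 config get cpus      # now succeeds
  minikube -p functional-159278 config unset cpus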

                                                
                                    
TestFunctional/parallel/DashboardCmd (30.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-159278 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-159278 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24480: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.31s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-159278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.609807ms)

                                                
                                                
-- stdout --
	* [functional-159278] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:54:39.370030   24057 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:54:39.370283   24057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:54:39.370293   24057 out.go:358] Setting ErrFile to fd 2...
	I0818 18:54:39.370297   24057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:54:39.370501   24057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 18:54:39.371081   24057 out.go:352] Setting JSON to false
	I0818 18:54:39.372101   24057 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2223,"bootTime":1724005056,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:54:39.372159   24057 start.go:139] virtualization: kvm guest
	I0818 18:54:39.374263   24057 out.go:177] * [functional-159278] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:54:39.376082   24057 notify.go:220] Checking for updates...
	I0818 18:54:39.376087   24057 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:54:39.377544   24057 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:54:39.378875   24057 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:54:39.380219   24057 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:54:39.381251   24057 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:54:39.382296   24057 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:54:39.383754   24057 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:54:39.384177   24057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:54:39.384220   24057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:39.400054   24057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0818 18:54:39.400471   24057 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:39.401039   24057 main.go:141] libmachine: Using API Version  1
	I0818 18:54:39.401063   24057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:39.401469   24057 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:39.401625   24057 main.go:141] libmachine: (functional-159278) Calling .DriverName
	I0818 18:54:39.401835   24057 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:54:39.402119   24057 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:54:39.402149   24057 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:39.416842   24057 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38433
	I0818 18:54:39.417230   24057 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:39.417685   24057 main.go:141] libmachine: Using API Version  1
	I0818 18:54:39.417719   24057 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:39.418016   24057 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:39.418246   24057 main.go:141] libmachine: (functional-159278) Calling .DriverName
	I0818 18:54:39.451451   24057 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 18:54:39.452686   24057 start.go:297] selected driver: kvm2
	I0818 18:54:39.452712   24057 start.go:901] validating driver "kvm2" against &{Name:functional-159278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-159278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:54:39.452852   24057 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:54:39.455478   24057 out.go:201] 
	W0818 18:54:39.456780   24057 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0818 18:54:39.457982   24057 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159278 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
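
A sketch of the validation exercised here: with --dry-run, an undersized --memory request is rejected up front (RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23) without touching the existing cluster:

  minikube start -p functional-159278 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  # fails: 250MiB is below the usable minimum of 1800MB
  minikube start -p functional-159278 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
  # passes, since it keeps the profile's existing memory allocation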

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-159278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-159278 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (185.813299ms)

                                                
                                                
-- stdout --
	* [functional-159278] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:54:39.200640   23929 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:54:39.200850   23929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:54:39.200877   23929 out.go:358] Setting ErrFile to fd 2...
	I0818 18:54:39.200893   23929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:54:39.201379   23929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 18:54:39.202157   23929 out.go:352] Setting JSON to false
	I0818 18:54:39.203589   23929 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2223,"bootTime":1724005056,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:54:39.203705   23929 start.go:139] virtualization: kvm guest
	I0818 18:54:39.206141   23929 out.go:177] * [functional-159278] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0818 18:54:39.207736   23929 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:54:39.207810   23929 notify.go:220] Checking for updates...
	I0818 18:54:39.210221   23929 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:54:39.211999   23929 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 18:54:39.213235   23929 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 18:54:39.214576   23929 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:54:39.215814   23929 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:54:39.217511   23929 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 18:54:39.218077   23929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:54:39.218117   23929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:39.252235   23929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40885
	I0818 18:54:39.252742   23929 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:39.253330   23929 main.go:141] libmachine: Using API Version  1
	I0818 18:54:39.253351   23929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:39.253661   23929 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:39.253820   23929 main.go:141] libmachine: (functional-159278) Calling .DriverName
	I0818 18:54:39.254079   23929 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:54:39.254523   23929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 18:54:39.254557   23929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:39.272201   23929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0818 18:54:39.272613   23929 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:39.273271   23929 main.go:141] libmachine: Using API Version  1
	I0818 18:54:39.273303   23929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:39.273680   23929 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:39.273838   23929 main.go:141] libmachine: (functional-159278) Calling .DriverName
	I0818 18:54:39.311660   23929 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0818 18:54:39.312852   23929 start.go:297] selected driver: kvm2
	I0818 18:54:39.312865   23929 start.go:901] validating driver "kvm2" against &{Name:functional-159278 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-159278 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.189 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:54:39.312953   23929 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:54:39.315030   23929 out.go:201] 
	W0818 18:54:39.316132   23929 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0818 18:54:39.317459   23929 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.50s)
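
A sketch of the three status forms exercised (the custom format string, including its "kublet" spelling, is copied verbatim from the test):

  minikube -p functional-159278 status
  minikube -p functional-159278 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  minikube -p functional-159278 status -o json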

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-159278 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-159278 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9q46h" [ab9e8656-e8a7-48a7-9ec3-bdea6c9a0ca7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9q46h" [ab9e8656-e8a7-48a7-9ec3-bdea6c9a0ca7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003301949s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.189:32037
functional_test.go:1675: http://192.168.39.189:32037: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-9q46h

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.189:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.189:32037
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.59s)
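
A sketch of the NodePort round trip checked above, assuming curl is available on the host (the test performs the equivalent HTTP GET itself):

  kubectl --context functional-159278 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-159278 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(minikube -p functional-159278 service hello-node-connect --url)
  curl "$URL"    # echoserver answers with the pod hostname and the request headers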

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (48.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0a040d4b-7878-48f6-bbed-8f05ec807881] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005135146s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-159278 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-159278 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-159278 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-159278 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [68fc0abb-6cdb-4c83-98ec-8c81e4f98dc6] Pending
helpers_test.go:344: "sp-pod" [68fc0abb-6cdb-4c83-98ec-8c81e4f98dc6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [68fc0abb-6cdb-4c83-98ec-8c81e4f98dc6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.00346789s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-159278 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-159278 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-159278 delete -f testdata/storage-provisioner/pod.yaml: (1.13481454s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-159278 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0e587785-3f4a-42ea-a8d7-2adeaabe5014] Pending
helpers_test.go:344: "sp-pod" [0e587785-3f4a-42ea-a8d7-2adeaabe5014] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0e587785-3f4a-42ea-a8d7-2adeaabe5014] Running
2024/08/18 18:55:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004274732s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-159278 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.90s)
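
A sketch of the persistence check performed here: data written through the PVC has to survive the pod being deleted and recreated (manifests as referenced by the test; readiness waits omitted):

  kubectl --context functional-159278 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-159278 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-159278 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-159278 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-159278 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-159278 exec sp-pod -- ls /tmp/mount    # foo is still present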

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh -n functional-159278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cp functional-159278:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1200298554/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh -n functional-159278 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh -n functional-159278 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
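
A sketch of the copy directions exercised: host to node, node back to host, and host to an arbitrary directory inside the node (the local destination path below is illustrative):

  minikube -p functional-159278 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-159278 cp functional-159278:/home/docker/cp-test.txt ./cp-test.txt
  minikube -p functional-159278 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
  minikube -p functional-159278 ssh -n functional-159278 "sudo cat /tmp/does/not/exist/cp-test.txt"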

                                                
                                    
TestFunctional/parallel/MySQL (24.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-159278 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vrxhv" [844a087d-edbe-4ffa-833b-327aba0ed2f8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vrxhv" [844a087d-edbe-4ffa-833b-327aba0ed2f8] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.011825911s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-159278 exec mysql-6cdb49bbb-vrxhv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-159278 exec mysql-6cdb49bbb-vrxhv -- mysql -ppassword -e "show databases;": exit status 1 (391.916472ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-159278 exec mysql-6cdb49bbb-vrxhv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-159278 exec mysql-6cdb49bbb-vrxhv -- mysql -ppassword -e "show databases;": exit status 1 (148.766125ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-159278 exec mysql-6cdb49bbb-vrxhv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.64s)
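
The two non-zero exits above are ordinary MySQL warm-up races (authentication not yet initialised, socket not yet listening), and the test simply retries until the query succeeds. A sketch of the check, with the pod name left as a placeholder:

  kubectl --context functional-159278 replace --force -f testdata/mysql.yaml
  kubectl --context functional-159278 get pods -l app=mysql
  # retry until mysqld is up and accepts the root password
  kubectl --context functional-159278 exec <mysql-pod> -- mysql -ppassword -e "show databases;"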

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14934/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo cat /etc/test/nested/copy/14934/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14934.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo cat /etc/ssl/certs/14934.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14934.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo cat /usr/share/ca-certificates/14934.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/149342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo cat /etc/ssl/certs/149342.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/149342.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo cat /usr/share/ca-certificates/149342.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.49s)
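
A sketch of the verification itself: the synced test certificate has to be readable at each expected location inside the VM, both under its own name and under its hashed alias:

  minikube -p functional-159278 ssh "sudo cat /etc/ssl/certs/14934.pem"
  minikube -p functional-159278 ssh "sudo cat /usr/share/ca-certificates/14934.pem"
  minikube -p functional-159278 ssh "sudo cat /etc/ssl/certs/51391683.0"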

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-159278 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 ssh "sudo systemctl is-active docker": exit status 1 (225.970861ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 ssh "sudo systemctl is-active containerd": exit status 1 (219.73398ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
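
A sketch of the check: with crio selected as the container runtime, the other runtimes' systemd units must report inactive, so systemctl is-active exits non-zero for them:

  minikube -p functional-159278 ssh "sudo systemctl is-active docker"        # prints "inactive", non-zero exit
  minikube -p functional-159278 ssh "sudo systemctl is-active containerd"    # prints "inactive", non-zero exit
  minikube -p functional-159278 ssh "sudo systemctl is-active crio"          # the active runtime (not part of this test)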

                                                
                                    
TestFunctional/parallel/License (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-159278 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-159278 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-5mjwb" [7fce948f-f6d4-49be-8ed9-0c7140371e43] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-5mjwb" [7fce948f-f6d4-49be-8ed9-0c7140371e43] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004606188s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
E0818 18:54:27.879478   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1315: Took "268.220153ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.066828ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "245.809167ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "42.966133ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
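
A sketch of the profile-listing variants timed above; the light forms skip the slower per-cluster status probes, which is why they return in tens of milliseconds:

  minikube profile list
  minikube profile list -l              # light listing
  minikube profile list -o json
  minikube profile list -o json --light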

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdany-port705760926/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724007268413419070" to /tmp/TestFunctionalparallelMountCmdany-port705760926/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724007268413419070" to /tmp/TestFunctionalparallelMountCmdany-port705760926/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724007268413419070" to /tmp/TestFunctionalparallelMountCmdany-port705760926/001/test-1724007268413419070
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.345001ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 18 18:54 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 18 18:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 18 18:54 test-1724007268413419070
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh cat /mount-9p/test-1724007268413419070
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-159278 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [97c6ab6f-16d3-4959-839c-d0ee7582fb28] Pending
helpers_test.go:344: "busybox-mount" [97c6ab6f-16d3-4959-839c-d0ee7582fb28] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [97c6ab6f-16d3-4959-839c-d0ee7582fb28] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [97c6ab6f-16d3-4959-839c-d0ee7582fb28] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004323387s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-159278 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdany-port705760926/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.69s)
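A minimal sketch of the 9p mount flow this subtest drives, using the same commands with an illustrative host path:
  # start the 9p mount in the background (keep it running while testing)
  $ out/minikube-linux-amd64 mount -p functional-159278 /tmp/mount-src:/mount-9p &
  # confirm the guest sees a 9p filesystem at the mount point and list its contents
  $ out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T /mount-9p | grep 9p"
  $ out/minikube-linux-amd64 -p functional-159278 ssh -- ls -la /mount-9p
  # tear the mount down when done
  $ out/minikube-linux-amd64 -p functional-159278 ssh "sudo umount -f /mount-9p"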

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdspecific-port930190163/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.167217ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdspecific-port930190163/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdspecific-port930190163/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.07s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 service list -o json
functional_test.go:1494: Took "316.404328ms" to run "out/minikube-linux-amd64 -p functional-159278 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.189:30193
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.189:30193
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.69s)
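The service lookups in the ServiceCmd subtests above can be sketched as follows (endpoint values will differ per cluster):
  $ out/minikube-linux-amd64 -p functional-159278 service list
  $ out/minikube-linux-amd64 -p functional-159278 service list -o json
  # print the https:// and http:// NodePort endpoints for hello-node
  $ out/minikube-linux-amd64 -p functional-159278 service --namespace=default --https --url hello-node
  $ out/minikube-linux-amd64 -p functional-159278 service hello-node --url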

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2262837338/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2262837338/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2262837338/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T" /mount1: exit status 1 (314.426529ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-159278 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2262837338/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2262837338/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-159278 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2262837338/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)
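A rough sketch of the cleanup path this subtest verifies: several mounts are started from the same host directory and then killed in one shot (host directory name is illustrative):
  $ out/minikube-linux-amd64 mount -p functional-159278 /tmp/shared:/mount1 &
  $ out/minikube-linux-amd64 mount -p functional-159278 /tmp/shared:/mount2 &
  $ out/minikube-linux-amd64 mount -p functional-159278 /tmp/shared:/mount3 &
  # verify each mount point is visible in the guest
  $ out/minikube-linux-amd64 -p functional-159278 ssh "findmnt -T /mount1"
  # kill all mount processes for this profile at once
  $ out/minikube-linux-amd64 mount -p functional-159278 --kill=true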

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159278 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-159278
localhost/kicbase/echo-server:functional-159278
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159278 image ls --format short --alsologtostderr:
I0818 18:55:00.863435   25037 out.go:345] Setting OutFile to fd 1 ...
I0818 18:55:00.863699   25037 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:00.863708   25037 out.go:358] Setting ErrFile to fd 2...
I0818 18:55:00.863712   25037 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:00.863890   25037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
I0818 18:55:00.864560   25037 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:00.864733   25037 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:00.865310   25037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:00.865366   25037 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:00.879909   25037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40829
I0818 18:55:00.880428   25037 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:00.881025   25037 main.go:141] libmachine: Using API Version  1
I0818 18:55:00.881048   25037 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:00.881392   25037 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:00.881579   25037 main.go:141] libmachine: (functional-159278) Calling .GetState
I0818 18:55:00.883404   25037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:00.883450   25037 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:00.897654   25037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
I0818 18:55:00.898096   25037 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:00.898600   25037 main.go:141] libmachine: Using API Version  1
I0818 18:55:00.898620   25037 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:00.898938   25037 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:00.899122   25037 main.go:141] libmachine: (functional-159278) Calling .DriverName
I0818 18:55:00.899325   25037 ssh_runner.go:195] Run: systemctl --version
I0818 18:55:00.899356   25037 main.go:141] libmachine: (functional-159278) Calling .GetSSHHostname
I0818 18:55:00.902177   25037 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:00.902602   25037 main.go:141] libmachine: (functional-159278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:81:83", ip: ""} in network mk-functional-159278: {Iface:virbr1 ExpiryTime:2024-08-18 19:51:51 +0000 UTC Type:0 Mac:52:54:00:ab:81:83 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-159278 Clientid:01:52:54:00:ab:81:83}
I0818 18:55:00.902631   25037 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined IP address 192.168.39.189 and MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:00.902725   25037 main.go:141] libmachine: (functional-159278) Calling .GetSSHPort
I0818 18:55:00.902884   25037 main.go:141] libmachine: (functional-159278) Calling .GetSSHKeyPath
I0818 18:55:00.903018   25037 main.go:141] libmachine: (functional-159278) Calling .GetSSHUsername
I0818 18:55:00.903139   25037 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/functional-159278/id_rsa Username:docker}
I0818 18:55:00.986928   25037 ssh_runner.go:195] Run: sudo crictl images --output json
I0818 18:55:01.142765   25037 main.go:141] libmachine: Making call to close driver server
I0818 18:55:01.142781   25037 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:01.143100   25037 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
I0818 18:55:01.143136   25037 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:01.143146   25037 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:55:01.143160   25037 main.go:141] libmachine: Making call to close driver server
I0818 18:55:01.143181   25037 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:01.143392   25037 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:01.143480   25037 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:55:01.143425   25037 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)
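The image listing formats compared by this and the following ImageList subtests can be sketched as:
  $ out/minikube-linux-amd64 -p functional-159278 image ls --format short
  $ out/minikube-linux-amd64 -p functional-159278 image ls --format table
  $ out/minikube-linux-amd64 -p functional-159278 image ls --format json
  $ out/minikube-linux-amd64 -p functional-159278 image ls --format yaml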

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159278 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-159278  | 59a7519be075e | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| localhost/kicbase/echo-server           | functional-159278  | 9056ab77afb8e | 4.94MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159278 image ls --format table --alsologtostderr:
I0818 18:55:05.487660   25209 out.go:345] Setting OutFile to fd 1 ...
I0818 18:55:05.487781   25209 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:05.487791   25209 out.go:358] Setting ErrFile to fd 2...
I0818 18:55:05.487797   25209 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:05.488065   25209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
I0818 18:55:05.488831   25209 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:05.488982   25209 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:05.489545   25209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:05.489603   25209 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:05.504777   25209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42793
I0818 18:55:05.505255   25209 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:05.505965   25209 main.go:141] libmachine: Using API Version  1
I0818 18:55:05.506001   25209 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:05.506357   25209 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:05.506562   25209 main.go:141] libmachine: (functional-159278) Calling .GetState
I0818 18:55:05.508772   25209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:05.508812   25209 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:05.523481   25209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45323
I0818 18:55:05.523905   25209 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:05.524391   25209 main.go:141] libmachine: Using API Version  1
I0818 18:55:05.524420   25209 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:05.524796   25209 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:05.525043   25209 main.go:141] libmachine: (functional-159278) Calling .DriverName
I0818 18:55:05.525237   25209 ssh_runner.go:195] Run: systemctl --version
I0818 18:55:05.525267   25209 main.go:141] libmachine: (functional-159278) Calling .GetSSHHostname
I0818 18:55:05.528230   25209 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:05.528594   25209 main.go:141] libmachine: (functional-159278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:81:83", ip: ""} in network mk-functional-159278: {Iface:virbr1 ExpiryTime:2024-08-18 19:51:51 +0000 UTC Type:0 Mac:52:54:00:ab:81:83 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-159278 Clientid:01:52:54:00:ab:81:83}
I0818 18:55:05.528632   25209 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined IP address 192.168.39.189 and MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:05.528807   25209 main.go:141] libmachine: (functional-159278) Calling .GetSSHPort
I0818 18:55:05.529003   25209 main.go:141] libmachine: (functional-159278) Calling .GetSSHKeyPath
I0818 18:55:05.529160   25209 main.go:141] libmachine: (functional-159278) Calling .GetSSHUsername
I0818 18:55:05.529296   25209 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/functional-159278/id_rsa Username:docker}
I0818 18:55:05.656001   25209 ssh_runner.go:195] Run: sudo crictl images --output json
I0818 18:55:05.732943   25209 main.go:141] libmachine: Making call to close driver server
I0818 18:55:05.732963   25209 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:05.733301   25209 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
I0818 18:55:05.733304   25209 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:05.733338   25209 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:55:05.733348   25209 main.go:141] libmachine: Making call to close driver server
I0818 18:55:05.733358   25209 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:05.733610   25209 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:05.733618   25209 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
I0818 18:55:05.733627   25209 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159278 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"59a7519be075ec389ef541bc6439cd955d12b21912dcb068d51392414f7b5130","repoDigests":["localhost/minikube-local-cache-test@sha256:2ccbe9c38099abd93f6ce13bddcc12ad3e9271eed7ad8f335edc7a87929a9e6d"],"repoTags":["localhost/minikube-local-cache-test:functional-159278"],"size":"3330"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009
664"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repo
Tags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20
240730-75a5af0c"],"size":"87165492"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3
c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-159278"],"size":"4943877"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-p
roxy:v1.31.0"],"size":"92728217"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256
:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159278 image ls --format json --alsologtostderr:
I0818 18:55:05.163580   25185 out.go:345] Setting OutFile to fd 1 ...
I0818 18:55:05.163825   25185 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:05.163833   25185 out.go:358] Setting ErrFile to fd 2...
I0818 18:55:05.163837   25185 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:05.164000   25185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
I0818 18:55:05.164543   25185 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:05.164658   25185 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:05.164999   25185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:05.165043   25185 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:05.179752   25185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33561
I0818 18:55:05.180273   25185 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:05.180922   25185 main.go:141] libmachine: Using API Version  1
I0818 18:55:05.180948   25185 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:05.181362   25185 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:05.181578   25185 main.go:141] libmachine: (functional-159278) Calling .GetState
I0818 18:55:05.183610   25185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:05.183660   25185 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:05.198599   25185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36007
I0818 18:55:05.199014   25185 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:05.199543   25185 main.go:141] libmachine: Using API Version  1
I0818 18:55:05.199564   25185 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:05.200034   25185 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:05.200267   25185 main.go:141] libmachine: (functional-159278) Calling .DriverName
I0818 18:55:05.200516   25185 ssh_runner.go:195] Run: systemctl --version
I0818 18:55:05.200536   25185 main.go:141] libmachine: (functional-159278) Calling .GetSSHHostname
I0818 18:55:05.203717   25185 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:05.204150   25185 main.go:141] libmachine: (functional-159278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:81:83", ip: ""} in network mk-functional-159278: {Iface:virbr1 ExpiryTime:2024-08-18 19:51:51 +0000 UTC Type:0 Mac:52:54:00:ab:81:83 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-159278 Clientid:01:52:54:00:ab:81:83}
I0818 18:55:05.204193   25185 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined IP address 192.168.39.189 and MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:05.204398   25185 main.go:141] libmachine: (functional-159278) Calling .GetSSHPort
I0818 18:55:05.204638   25185 main.go:141] libmachine: (functional-159278) Calling .GetSSHKeyPath
I0818 18:55:05.204826   25185 main.go:141] libmachine: (functional-159278) Calling .GetSSHUsername
I0818 18:55:05.205033   25185 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/functional-159278/id_rsa Username:docker}
I0818 18:55:05.338810   25185 ssh_runner.go:195] Run: sudo crictl images --output json
I0818 18:55:05.430517   25185 main.go:141] libmachine: Making call to close driver server
I0818 18:55:05.430533   25185 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:05.430789   25185 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:05.430808   25185 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:55:05.430817   25185 main.go:141] libmachine: Making call to close driver server
I0818 18:55:05.430824   25185 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:05.431200   25185 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:05.431264   25185 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
I0818 18:55:05.431291   25185 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159278 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 59a7519be075ec389ef541bc6439cd955d12b21912dcb068d51392414f7b5130
repoDigests:
- localhost/minikube-local-cache-test@sha256:2ccbe9c38099abd93f6ce13bddcc12ad3e9271eed7ad8f335edc7a87929a9e6d
repoTags:
- localhost/minikube-local-cache-test:functional-159278
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-159278
size: "4943877"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159278 image ls --format yaml --alsologtostderr:
I0818 18:55:01.192488   25062 out.go:345] Setting OutFile to fd 1 ...
I0818 18:55:01.192598   25062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:01.192606   25062 out.go:358] Setting ErrFile to fd 2...
I0818 18:55:01.192610   25062 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:01.192824   25062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
I0818 18:55:01.193347   25062 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:01.193442   25062 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:01.193795   25062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:01.193832   25062 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:01.209343   25062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
I0818 18:55:01.209814   25062 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:01.210497   25062 main.go:141] libmachine: Using API Version  1
I0818 18:55:01.210517   25062 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:01.210898   25062 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:01.211115   25062 main.go:141] libmachine: (functional-159278) Calling .GetState
I0818 18:55:01.213321   25062 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:01.213371   25062 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:01.228512   25062 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
I0818 18:55:01.228905   25062 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:01.229497   25062 main.go:141] libmachine: Using API Version  1
I0818 18:55:01.229572   25062 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:01.229916   25062 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:01.230128   25062 main.go:141] libmachine: (functional-159278) Calling .DriverName
I0818 18:55:01.230359   25062 ssh_runner.go:195] Run: systemctl --version
I0818 18:55:01.230388   25062 main.go:141] libmachine: (functional-159278) Calling .GetSSHHostname
I0818 18:55:01.233429   25062 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:01.233815   25062 main.go:141] libmachine: (functional-159278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:81:83", ip: ""} in network mk-functional-159278: {Iface:virbr1 ExpiryTime:2024-08-18 19:51:51 +0000 UTC Type:0 Mac:52:54:00:ab:81:83 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-159278 Clientid:01:52:54:00:ab:81:83}
I0818 18:55:01.233843   25062 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined IP address 192.168.39.189 and MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:01.234025   25062 main.go:141] libmachine: (functional-159278) Calling .GetSSHPort
I0818 18:55:01.234220   25062 main.go:141] libmachine: (functional-159278) Calling .GetSSHKeyPath
I0818 18:55:01.234400   25062 main.go:141] libmachine: (functional-159278) Calling .GetSSHUsername
I0818 18:55:01.234577   25062 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/functional-159278/id_rsa Username:docker}
I0818 18:55:01.381201   25062 ssh_runner.go:195] Run: sudo crictl images --output json
I0818 18:55:01.478642   25062 main.go:141] libmachine: Making call to close driver server
I0818 18:55:01.478668   25062 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:01.478962   25062 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:01.478981   25062 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
I0818 18:55:01.478996   25062 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:55:01.479033   25062 main.go:141] libmachine: Making call to close driver server
I0818 18:55:01.479045   25062 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:01.479249   25062 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:01.479262   25062 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:55:01.479290   25062 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-159278 ssh pgrep buildkitd: exit status 1 (233.922708ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image build -t localhost/my-image:functional-159278 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 image build -t localhost/my-image:functional-159278 testdata/build --alsologtostderr: (6.323229228s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-159278 image build -t localhost/my-image:functional-159278 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b84f2cf0bed
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-159278
--> b3cf65d5c86
Successfully tagged localhost/my-image:functional-159278
b3cf65d5c860d141424cc9ecfa867fb6aa7fb6c48849d4eaee93037e86411448
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-159278 image build -t localhost/my-image:functional-159278 testdata/build --alsologtostderr:
I0818 18:55:01.767340   25130 out.go:345] Setting OutFile to fd 1 ...
I0818 18:55:01.767635   25130 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:01.767646   25130 out.go:358] Setting ErrFile to fd 2...
I0818 18:55:01.767650   25130 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:55:01.767859   25130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
I0818 18:55:01.768433   25130 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:01.768966   25130 config.go:182] Loaded profile config "functional-159278": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0818 18:55:01.769356   25130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:01.769410   25130 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:01.785149   25130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
I0818 18:55:01.785730   25130 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:01.786360   25130 main.go:141] libmachine: Using API Version  1
I0818 18:55:01.786407   25130 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:01.786852   25130 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:01.787054   25130 main.go:141] libmachine: (functional-159278) Calling .GetState
I0818 18:55:01.789023   25130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0818 18:55:01.789069   25130 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:55:01.805263   25130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45993
I0818 18:55:01.805725   25130 main.go:141] libmachine: () Calling .GetVersion
I0818 18:55:01.806332   25130 main.go:141] libmachine: Using API Version  1
I0818 18:55:01.806361   25130 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:55:01.806736   25130 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:55:01.806930   25130 main.go:141] libmachine: (functional-159278) Calling .DriverName
I0818 18:55:01.807174   25130 ssh_runner.go:195] Run: systemctl --version
I0818 18:55:01.807207   25130 main.go:141] libmachine: (functional-159278) Calling .GetSSHHostname
I0818 18:55:01.810186   25130 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:01.810580   25130 main.go:141] libmachine: (functional-159278) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:81:83", ip: ""} in network mk-functional-159278: {Iface:virbr1 ExpiryTime:2024-08-18 19:51:51 +0000 UTC Type:0 Mac:52:54:00:ab:81:83 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:functional-159278 Clientid:01:52:54:00:ab:81:83}
I0818 18:55:01.810638   25130 main.go:141] libmachine: (functional-159278) DBG | domain functional-159278 has defined IP address 192.168.39.189 and MAC address 52:54:00:ab:81:83 in network mk-functional-159278
I0818 18:55:01.810755   25130 main.go:141] libmachine: (functional-159278) Calling .GetSSHPort
I0818 18:55:01.810935   25130 main.go:141] libmachine: (functional-159278) Calling .GetSSHKeyPath
I0818 18:55:01.811149   25130 main.go:141] libmachine: (functional-159278) Calling .GetSSHUsername
I0818 18:55:01.811320   25130 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/functional-159278/id_rsa Username:docker}
I0818 18:55:01.931859   25130 build_images.go:161] Building image from path: /tmp/build.3397762485.tar
I0818 18:55:01.931934   25130 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0818 18:55:01.964862   25130 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3397762485.tar
I0818 18:55:01.982903   25130 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3397762485.tar: stat -c "%s %y" /var/lib/minikube/build/build.3397762485.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3397762485.tar': No such file or directory
I0818 18:55:01.982936   25130 ssh_runner.go:362] scp /tmp/build.3397762485.tar --> /var/lib/minikube/build/build.3397762485.tar (3072 bytes)
I0818 18:55:02.084512   25130 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3397762485
I0818 18:55:02.118817   25130 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3397762485 -xf /var/lib/minikube/build/build.3397762485.tar
I0818 18:55:02.154735   25130 crio.go:315] Building image: /var/lib/minikube/build/build.3397762485
I0818 18:55:02.154822   25130 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-159278 /var/lib/minikube/build/build.3397762485 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0818 18:55:08.014544   25130 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-159278 /var/lib/minikube/build/build.3397762485 --cgroup-manager=cgroupfs: (5.85968274s)
I0818 18:55:08.014618   25130 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3397762485
I0818 18:55:08.027042   25130 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3397762485.tar
I0818 18:55:08.037924   25130 build_images.go:217] Built localhost/my-image:functional-159278 from /tmp/build.3397762485.tar
I0818 18:55:08.037956   25130 build_images.go:133] succeeded building to: functional-159278
I0818 18:55:08.037960   25130 build_images.go:134] failed building to: 
I0818 18:55:08.037983   25130 main.go:141] libmachine: Making call to close driver server
I0818 18:55:08.037993   25130 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:08.038345   25130 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:08.038351   25130 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
I0818 18:55:08.038371   25130 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:55:08.038386   25130 main.go:141] libmachine: Making call to close driver server
I0818 18:55:08.038395   25130 main.go:141] libmachine: (functional-159278) Calling .Close
I0818 18:55:08.038632   25130 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:55:08.038647   25130 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:55:08.038674   25130 main.go:141] libmachine: (functional-159278) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.78s)
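
For reference, the image-build flow logged above reduces to a short sequence on the node: check whether the build-context tarball is already there, copy it over, unpack it, and build with podman. A minimal shell sketch of that sequence (run inside the VM, e.g. via minikube ssh; the build.3397762485.tar name is the temporary tarball from this particular run):

  sudo mkdir -p /var/lib/minikube/build
  stat -c "%s %y" /var/lib/minikube/build/build.3397762485.tar          # exit status 1 means the tarball still has to be copied over
  sudo mkdir -p /var/lib/minikube/build/build.3397762485
  sudo tar -C /var/lib/minikube/build/build.3397762485 -xf /var/lib/minikube/build/build.3397762485.tar
  sudo podman build -t localhost/my-image:functional-159278 /var/lib/minikube/build/build.3397762485 --cgroup-manager=cgroupfs
  sudo rm -rf /var/lib/minikube/build/build.3397762485                  # cleanup, as in the log above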

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.907664496s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-159278
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image load --daemon kicbase/echo-server:functional-159278 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 image load --daemon kicbase/echo-server:functional-159278 --alsologtostderr: (5.626352361s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.046712978s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-159278
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image load --daemon kicbase/echo-server:functional-159278 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image save kicbase/echo-server:functional-159278 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-159278 image save kicbase/echo-server:functional-159278 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.151600205s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image rm kicbase/echo-server:functional-159278 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-159278
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 image save --daemon kicbase/echo-server:functional-159278 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-159278
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
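
Taken together, the ImageCommands tests above exercise a save/remove/load round trip through the minikube CLI. A condensed sketch of the same sequence (profile name from this run; the tar path here is illustrative, the run itself used a path under the Jenkins workspace):

  out/minikube-linux-amd64 -p functional-159278 image save kicbase/echo-server:functional-159278 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-159278 image rm kicbase/echo-server:functional-159278
  out/minikube-linux-amd64 -p functional-159278 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-159278 image ls                # the image should be listed again after the load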

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.93s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-159278 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-159278
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-159278
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-159278
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (246.61s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-189125 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0818 18:56:44.019302   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:57:11.722744   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-189125 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m5.950653336s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (246.61s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.43s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- rollout status deployment/busybox
E0818 18:59:26.647600   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:26.653969   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:26.665376   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:26.686763   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:26.728153   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:26.809616   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:26.971058   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:27.292566   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:27.934200   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-189125 -- rollout status deployment/busybox: (4.373617517s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-8bwfj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-fvdcn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-kxdwj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-8bwfj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-fvdcn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-kxdwj -- nslookup kubernetes.default
E0818 18:59:29.215861   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-8bwfj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-fvdcn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-kxdwj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.43s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-8bwfj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-8bwfj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-fvdcn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-fvdcn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-kxdwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-189125 -- exec busybox-7dff88458-kxdwj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
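
The host-reachability check above follows one pattern per pod: resolve host.minikube.internal from inside the pod, then ping the address it resolves to. Roughly, with a pod name and gateway address from this run (the awk 'NR==5' pulls the address line out of busybox's nslookup output, exactly as the test does):

  HOST_IP=$(kubectl --context ha-189125 exec busybox-7dff88458-8bwfj -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-189125 exec busybox-7dff88458-8bwfj -- sh -c "ping -c 1 $HOST_IP"   # 192.168.39.1 in this run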

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.78s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-189125 -v=7 --alsologtostderr
E0818 18:59:31.777335   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:36.899448   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:47.140880   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:00:07.623014   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-189125 -v=7 --alsologtostderr: (56.961217261s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.78s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-189125 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp testdata/cp-test.txt ha-189125:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125:/home/docker/cp-test.txt ha-189125-m02:/home/docker/cp-test_ha-189125_ha-189125-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test_ha-189125_ha-189125-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125:/home/docker/cp-test.txt ha-189125-m03:/home/docker/cp-test_ha-189125_ha-189125-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m03 "sudo cat /home/docker/cp-test_ha-189125_ha-189125-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125:/home/docker/cp-test.txt ha-189125-m04:/home/docker/cp-test_ha-189125_ha-189125-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m04 "sudo cat /home/docker/cp-test_ha-189125_ha-189125-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp testdata/cp-test.txt ha-189125-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m02:/home/docker/cp-test.txt ha-189125:/home/docker/cp-test_ha-189125-m02_ha-189125.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125 "sudo cat /home/docker/cp-test_ha-189125-m02_ha-189125.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m02:/home/docker/cp-test.txt ha-189125-m03:/home/docker/cp-test_ha-189125-m02_ha-189125-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m03 "sudo cat /home/docker/cp-test_ha-189125-m02_ha-189125-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m02:/home/docker/cp-test.txt ha-189125-m04:/home/docker/cp-test_ha-189125-m02_ha-189125-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m04 "sudo cat /home/docker/cp-test_ha-189125-m02_ha-189125-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp testdata/cp-test.txt ha-189125-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt ha-189125:/home/docker/cp-test_ha-189125-m03_ha-189125.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125 "sudo cat /home/docker/cp-test_ha-189125-m03_ha-189125.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt ha-189125-m02:/home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test_ha-189125-m03_ha-189125-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m03:/home/docker/cp-test.txt ha-189125-m04:/home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m04 "sudo cat /home/docker/cp-test_ha-189125-m03_ha-189125-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp testdata/cp-test.txt ha-189125-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3256308944/001/cp-test_ha-189125-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt ha-189125:/home/docker/cp-test_ha-189125-m04_ha-189125.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125 "sudo cat /home/docker/cp-test_ha-189125-m04_ha-189125.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt ha-189125-m02:/home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test_ha-189125-m04_ha-189125-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 cp ha-189125-m04:/home/docker/cp-test.txt ha-189125-m03:/home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m03 "sudo cat /home/docker/cp-test_ha-189125-m04_ha-189125-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.46s)
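
Every copy in the CopyFile test above is verified the same way: minikube cp pushes the file to a node, then minikube ssh reads it back on that node. For example, with the profile and node names from this run:

  out/minikube-linux-amd64 -p ha-189125 cp testdata/cp-test.txt ha-189125-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-189125 ssh -n ha-189125-m02 "sudo cat /home/docker/cp-test.txt"   # should echo the contents of testdata/cp-test.txt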

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.480168373s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.83s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-189125 node delete m03 -v=7 --alsologtostderr: (16.085972216s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.83s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (328.73s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-189125 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0818 19:14:26.648381   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:15:49.710051   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:16:44.018695   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-189125 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m27.970511465s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (328.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.29s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-189125 --control-plane -v=7 --alsologtostderr
E0818 19:19:26.647617   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-189125 --control-plane -v=7 --alsologtostderr: (1m17.47792177s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-189125 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

                                                
                                    
TestJSONOutput/start/Command (55.22s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-364130 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-364130 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.214629963s)
--- PASS: TestJSONOutput/start/Command (55.22s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-364130 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-364130 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-364130 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-364130 --output=json --user=testUser: (7.363617745s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-279213 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-279213 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.274874ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3aa49a47-0696-4aa6-9ba0-8538d1ac9eb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-279213] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e6a7efd-ca47-45cd-a036-de82fe4c72b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"4a03917f-b460-4b6a-b898-4a3ed05cbc31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b1394aad-2cd2-47bd-8d19-d146fe5b00b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig"}}
	{"specversion":"1.0","id":"c440fa93-ad18-479d-b950-be2f82f7c250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube"}}
	{"specversion":"1.0","id":"61bc8198-c5a6-4969-a6ea-dc0fd8f94305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2d792c98-d9f1-4a6d-9492-dfba96ed786b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8f14217b-cf50-4fec-b845-9dbda2b5aa42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-279213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-279213
--- PASS: TestErrorJSONOutput (0.18s)
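
The --output=json stream captured above is a series of CloudEvents, one JSON object per line, so it can be consumed with ordinary line-oriented tools. As a sketch only (jq is not part of the test, just one way to read the stream), the error event from this run could be extracted like this:

  out/minikube-linux-amd64 start -p json-output-error-279213 --memory=2200 --output=json --wait=true --driver=fail 2>/dev/null \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'
  # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64 (exit 56)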

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (86.02s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-158995 --driver=kvm2  --container-runtime=crio
E0818 19:21:44.019188   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-158995 --driver=kvm2  --container-runtime=crio: (42.666013349s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-162569 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-162569 --driver=kvm2  --container-runtime=crio: (40.73308905s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-158995
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-162569
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-162569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-162569
helpers_test.go:175: Cleaning up "first-158995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-158995
--- PASS: TestMinikubeProfile (86.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-371134 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-371134 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.167624435s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.17s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.35s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-371134 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-371134 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-387803 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-387803 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.627693502s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.63s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-387803 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-387803 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-371134 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-387803 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-387803 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-387803
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-387803: (1.286880802s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048993 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0818 19:24:26.646630   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:24:47.086040   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048993 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.007012086s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.40s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.26s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-048993 -- rollout status deployment/busybox: (3.821585495s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-7frzh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-nxm4b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-7frzh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-nxm4b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-7frzh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-nxm4b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.26s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-7frzh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-7frzh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-nxm4b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec busybox-7dff88458-nxm4b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
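
The host-reachability check can be reproduced by hand with the same pipeline the test uses: resolve host.minikube.internal from inside a pod, then ping the address it returns (a sketch; the pod name and host IP vary per run):

$ out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
192.168.39.1
$ out/minikube-linux-amd64 kubectl -p multinode-048993 -- exec <pod-name> -- sh -c "ping -c 1 192.168.39.1"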

                                                
                                    
TestMultiNode/serial/AddNode (48.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-048993 -v 3 --alsologtostderr
E0818 19:26:44.018593   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-048993 -v 3 --alsologtostderr: (48.197865201s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.76s)
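
Adding a worker to an existing profile is a single command; a sketch of the step exercised above (the -v 3 / --alsologtostderr flags only raise log verbosity):

$ out/minikube-linux-amd64 node add -p multinode-048993 -v 3 --alsologtostderr
$ out/minikube-linux-amd64 -p multinode-048993 status --alsologtostderr   # the new worker appears as multinode-048993-m03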

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-048993 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp testdata/cp-test.txt multinode-048993:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1791348439/001/cp-test_multinode-048993.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993:/home/docker/cp-test.txt multinode-048993-m02:/home/docker/cp-test_multinode-048993_multinode-048993-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m02 "sudo cat /home/docker/cp-test_multinode-048993_multinode-048993-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993:/home/docker/cp-test.txt multinode-048993-m03:/home/docker/cp-test_multinode-048993_multinode-048993-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m03 "sudo cat /home/docker/cp-test_multinode-048993_multinode-048993-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp testdata/cp-test.txt multinode-048993-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1791348439/001/cp-test_multinode-048993-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt multinode-048993:/home/docker/cp-test_multinode-048993-m02_multinode-048993.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993 "sudo cat /home/docker/cp-test_multinode-048993-m02_multinode-048993.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt multinode-048993-m03:/home/docker/cp-test_multinode-048993-m02_multinode-048993-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m03 "sudo cat /home/docker/cp-test_multinode-048993-m02_multinode-048993-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp testdata/cp-test.txt multinode-048993-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1791348439/001/cp-test_multinode-048993-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt multinode-048993:/home/docker/cp-test_multinode-048993-m03_multinode-048993.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993 "sudo cat /home/docker/cp-test_multinode-048993-m03_multinode-048993.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993-m03:/home/docker/cp-test.txt multinode-048993-m02:/home/docker/cp-test_multinode-048993-m03_multinode-048993-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m02 "sudo cat /home/docker/cp-test_multinode-048993-m03_multinode-048993-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.03s)
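
The copy matrix above reduces to three forms of minikube cp plus an ssh read-back; a sketch using the same profile and node names:

# local file -> node
$ out/minikube-linux-amd64 -p multinode-048993 cp testdata/cp-test.txt multinode-048993-m02:/home/docker/cp-test.txt
# node -> local path (any writable local path)
$ out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt /tmp/cp-test_multinode-048993-m02.txt
# node -> node
$ out/minikube-linux-amd64 -p multinode-048993 cp multinode-048993-m02:/home/docker/cp-test.txt multinode-048993-m03:/home/docker/cp-test.txt
# verify the copy landed by reading it back over ssh
$ out/minikube-linux-amd64 -p multinode-048993 ssh -n multinode-048993-m03 "sudo cat /home/docker/cp-test.txt"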

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-048993 node stop m03: (1.45074498s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048993 status: exit status 7 (411.568724ms)

                                                
                                                
-- stdout --
	multinode-048993
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048993-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048993-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-048993 status --alsologtostderr: exit status 7 (415.070714ms)

                                                
                                                
-- stdout --
	multinode-048993
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048993-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048993-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:27:15.440106   43072 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:27:15.440200   43072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:27:15.440212   43072 out.go:358] Setting ErrFile to fd 2...
	I0818 19:27:15.440216   43072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:27:15.440385   43072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:27:15.440542   43072 out.go:352] Setting JSON to false
	I0818 19:27:15.440571   43072 mustload.go:65] Loading cluster: multinode-048993
	I0818 19:27:15.440628   43072 notify.go:220] Checking for updates...
	I0818 19:27:15.440901   43072 config.go:182] Loaded profile config "multinode-048993": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:27:15.440914   43072 status.go:255] checking status of multinode-048993 ...
	I0818 19:27:15.441306   43072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:27:15.441363   43072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:27:15.461254   43072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41087
	I0818 19:27:15.461653   43072 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:27:15.462303   43072 main.go:141] libmachine: Using API Version  1
	I0818 19:27:15.462335   43072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:27:15.462658   43072 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:27:15.462840   43072 main.go:141] libmachine: (multinode-048993) Calling .GetState
	I0818 19:27:15.464473   43072 status.go:330] multinode-048993 host status = "Running" (err=<nil>)
	I0818 19:27:15.464491   43072 host.go:66] Checking if "multinode-048993" exists ...
	I0818 19:27:15.464790   43072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:27:15.464827   43072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:27:15.479491   43072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40161
	I0818 19:27:15.479869   43072 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:27:15.480382   43072 main.go:141] libmachine: Using API Version  1
	I0818 19:27:15.480408   43072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:27:15.480723   43072 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:27:15.480940   43072 main.go:141] libmachine: (multinode-048993) Calling .GetIP
	I0818 19:27:15.483640   43072 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:27:15.483892   43072 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:27:15.483914   43072 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:27:15.484048   43072 host.go:66] Checking if "multinode-048993" exists ...
	I0818 19:27:15.484321   43072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:27:15.484354   43072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:27:15.499214   43072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43537
	I0818 19:27:15.499690   43072 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:27:15.500108   43072 main.go:141] libmachine: Using API Version  1
	I0818 19:27:15.500129   43072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:27:15.500451   43072 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:27:15.500707   43072 main.go:141] libmachine: (multinode-048993) Calling .DriverName
	I0818 19:27:15.500941   43072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:27:15.500961   43072 main.go:141] libmachine: (multinode-048993) Calling .GetSSHHostname
	I0818 19:27:15.504159   43072 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:27:15.504617   43072 main.go:141] libmachine: (multinode-048993) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:ba:a0", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:24:30 +0000 UTC Type:0 Mac:52:54:00:6f:ba:a0 Iaid: IPaddr:192.168.39.185 Prefix:24 Hostname:multinode-048993 Clientid:01:52:54:00:6f:ba:a0}
	I0818 19:27:15.504640   43072 main.go:141] libmachine: (multinode-048993) DBG | domain multinode-048993 has defined IP address 192.168.39.185 and MAC address 52:54:00:6f:ba:a0 in network mk-multinode-048993
	I0818 19:27:15.504812   43072 main.go:141] libmachine: (multinode-048993) Calling .GetSSHPort
	I0818 19:27:15.504963   43072 main.go:141] libmachine: (multinode-048993) Calling .GetSSHKeyPath
	I0818 19:27:15.505105   43072 main.go:141] libmachine: (multinode-048993) Calling .GetSSHUsername
	I0818 19:27:15.505210   43072 sshutil.go:53] new ssh client: &{IP:192.168.39.185 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993/id_rsa Username:docker}
	I0818 19:27:15.590802   43072 ssh_runner.go:195] Run: systemctl --version
	I0818 19:27:15.597018   43072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:27:15.615344   43072 kubeconfig.go:125] found "multinode-048993" server: "https://192.168.39.185:8443"
	I0818 19:27:15.615372   43072 api_server.go:166] Checking apiserver status ...
	I0818 19:27:15.615441   43072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:27:15.630084   43072 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1107/cgroup
	W0818 19:27:15.639135   43072 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1107/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:27:15.639197   43072 ssh_runner.go:195] Run: ls
	I0818 19:27:15.643822   43072 api_server.go:253] Checking apiserver healthz at https://192.168.39.185:8443/healthz ...
	I0818 19:27:15.648698   43072 api_server.go:279] https://192.168.39.185:8443/healthz returned 200:
	ok
	I0818 19:27:15.648717   43072 status.go:422] multinode-048993 apiserver status = Running (err=<nil>)
	I0818 19:27:15.648728   43072 status.go:257] multinode-048993 status: &{Name:multinode-048993 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:27:15.648747   43072 status.go:255] checking status of multinode-048993-m02 ...
	I0818 19:27:15.649159   43072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:27:15.649213   43072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:27:15.663986   43072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43715
	I0818 19:27:15.664315   43072 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:27:15.664769   43072 main.go:141] libmachine: Using API Version  1
	I0818 19:27:15.664789   43072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:27:15.665063   43072 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:27:15.665214   43072 main.go:141] libmachine: (multinode-048993-m02) Calling .GetState
	I0818 19:27:15.666556   43072 status.go:330] multinode-048993-m02 host status = "Running" (err=<nil>)
	I0818 19:27:15.666570   43072 host.go:66] Checking if "multinode-048993-m02" exists ...
	I0818 19:27:15.666856   43072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:27:15.666915   43072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:27:15.680885   43072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0818 19:27:15.681254   43072 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:27:15.681676   43072 main.go:141] libmachine: Using API Version  1
	I0818 19:27:15.681698   43072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:27:15.681971   43072 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:27:15.682103   43072 main.go:141] libmachine: (multinode-048993-m02) Calling .GetIP
	I0818 19:27:15.684396   43072 main.go:141] libmachine: (multinode-048993-m02) DBG | domain multinode-048993-m02 has defined MAC address 52:54:00:7e:e2:2a in network mk-multinode-048993
	I0818 19:27:15.684816   43072 main.go:141] libmachine: (multinode-048993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e2:2a", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:25:36 +0000 UTC Type:0 Mac:52:54:00:7e:e2:2a Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-048993-m02 Clientid:01:52:54:00:7e:e2:2a}
	I0818 19:27:15.684855   43072 main.go:141] libmachine: (multinode-048993-m02) DBG | domain multinode-048993-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:7e:e2:2a in network mk-multinode-048993
	I0818 19:27:15.684952   43072 host.go:66] Checking if "multinode-048993-m02" exists ...
	I0818 19:27:15.685291   43072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:27:15.685322   43072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:27:15.699103   43072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40283
	I0818 19:27:15.699431   43072 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:27:15.699802   43072 main.go:141] libmachine: Using API Version  1
	I0818 19:27:15.699821   43072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:27:15.700140   43072 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:27:15.700308   43072 main.go:141] libmachine: (multinode-048993-m02) Calling .DriverName
	I0818 19:27:15.700444   43072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:27:15.700469   43072 main.go:141] libmachine: (multinode-048993-m02) Calling .GetSSHHostname
	I0818 19:27:15.702704   43072 main.go:141] libmachine: (multinode-048993-m02) DBG | domain multinode-048993-m02 has defined MAC address 52:54:00:7e:e2:2a in network mk-multinode-048993
	I0818 19:27:15.703078   43072 main.go:141] libmachine: (multinode-048993-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:e2:2a", ip: ""} in network mk-multinode-048993: {Iface:virbr1 ExpiryTime:2024-08-18 20:25:36 +0000 UTC Type:0 Mac:52:54:00:7e:e2:2a Iaid: IPaddr:192.168.39.14 Prefix:24 Hostname:multinode-048993-m02 Clientid:01:52:54:00:7e:e2:2a}
	I0818 19:27:15.703106   43072 main.go:141] libmachine: (multinode-048993-m02) DBG | domain multinode-048993-m02 has defined IP address 192.168.39.14 and MAC address 52:54:00:7e:e2:2a in network mk-multinode-048993
	I0818 19:27:15.703235   43072 main.go:141] libmachine: (multinode-048993-m02) Calling .GetSSHPort
	I0818 19:27:15.703398   43072 main.go:141] libmachine: (multinode-048993-m02) Calling .GetSSHKeyPath
	I0818 19:27:15.703540   43072 main.go:141] libmachine: (multinode-048993-m02) Calling .GetSSHUsername
	I0818 19:27:15.703670   43072 sshutil.go:53] new ssh client: &{IP:192.168.39.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-7747/.minikube/machines/multinode-048993-m02/id_rsa Username:docker}
	I0818 19:27:15.782642   43072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:27:15.796221   43072 status.go:257] multinode-048993-m02 status: &{Name:multinode-048993-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:27:15.796252   43072 status.go:255] checking status of multinode-048993-m03 ...
	I0818 19:27:15.796666   43072 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0818 19:27:15.796711   43072 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:27:15.811721   43072 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0818 19:27:15.812166   43072 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:27:15.812629   43072 main.go:141] libmachine: Using API Version  1
	I0818 19:27:15.812650   43072 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:27:15.812998   43072 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:27:15.813184   43072 main.go:141] libmachine: (multinode-048993-m03) Calling .GetState
	I0818 19:27:15.814625   43072 status.go:330] multinode-048993-m03 host status = "Stopped" (err=<nil>)
	I0818 19:27:15.814641   43072 status.go:343] host is not running, skipping remaining checks
	I0818 19:27:15.814648   43072 status.go:257] multinode-048993-m03 status: &{Name:multinode-048993-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
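
A sketch of the behaviour being asserted: after stopping one worker, status still lists every node but exits non-zero (exit status 7 here) because one host is Stopped:

$ out/minikube-linux-amd64 -p multinode-048993 node stop m03
$ out/minikube-linux-amd64 -p multinode-048993 status
$ echo $?   # 7: at least one host is Stopped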

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-048993 node start m03 -v=7 --alsologtostderr: (39.163092499s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.76s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-048993 node delete m03: (1.797421438s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.36s)
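
Node removal and the follow-up Ready check, as exercised above; a sketch (the go-template prints one Ready status per remaining node):

$ out/minikube-linux-amd64 -p multinode-048993 node delete m03
$ kubectl get nodes
$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'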

                                                
                                    
TestMultiNode/serial/RestartMultiNode (188.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048993 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0818 19:36:44.019057   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048993 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m7.755048696s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-048993 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (188.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (43.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-048993
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048993-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-048993-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.12321ms)

                                                
                                                
-- stdout --
	* [multinode-048993-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-048993-m02' is duplicated with machine name 'multinode-048993-m02' in profile 'multinode-048993'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-048993-m03 --driver=kvm2  --container-runtime=crio
E0818 19:39:26.646541   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-048993-m03 --driver=kvm2  --container-runtime=crio: (42.70610033s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-048993
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-048993: exit status 80 (205.796596ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-048993 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-048993-m03 already exists in multinode-048993-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-048993-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.75s)
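
A sketch of the two name-conflict guards exercised above: a new profile may not reuse a machine name that already belongs to an existing profile (exit 14, MK_USAGE), and node add refuses to create a node whose name collides with another profile (exit 80, GUEST_NODE_ADD):

$ out/minikube-linux-amd64 start -p multinode-048993-m02 --driver=kvm2 --container-runtime=crio   # rejected: name owned by profile multinode-048993
$ out/minikube-linux-amd64 start -p multinode-048993-m03 --driver=kvm2 --container-runtime=crio   # standalone profile starts fine
$ out/minikube-linux-amd64 node add -p multinode-048993                                           # rejected: the next node would be m03, which now names a profile
$ out/minikube-linux-amd64 delete -p multinode-048993-m03                                         # clean up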

                                                
                                    
TestScheduledStopUnix (113.42s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-890063 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-890063 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.873733625s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890063 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-890063 -n scheduled-stop-890063
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890063 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890063 --cancel-scheduled
E0818 19:46:44.018997   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890063 -n scheduled-stop-890063
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-890063
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-890063 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-890063
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-890063: exit status 7 (64.904357ms)

                                                
                                                
-- stdout --
	scheduled-stop-890063
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890063 -n scheduled-stop-890063
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-890063 -n scheduled-stop-890063: exit status 7 (63.977523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-890063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-890063
--- PASS: TestScheduledStopUnix (113.42s)
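
The scheduled-stop flow above can be driven directly; a sketch, assuming the profile is already running:

# schedule a stop five minutes out, then cancel it
$ out/minikube-linux-amd64 stop -p scheduled-stop-890063 --schedule 5m
$ out/minikube-linux-amd64 stop -p scheduled-stop-890063 --cancel-scheduled
# schedule a short stop and let it fire; status then reports Stopped and exits 7
$ out/minikube-linux-amd64 stop -p scheduled-stop-890063 --schedule 15s
$ sleep 20 && out/minikube-linux-amd64 status -p scheduled-stop-890063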

                                                
                                    
TestRunningBinaryUpgrade (235.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2056465327 start -p running-upgrade-319765 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2056465327 start -p running-upgrade-319765 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.59079617s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-319765 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-319765 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m46.654562506s)
helpers_test.go:175: Cleaning up "running-upgrade-319765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-319765
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-319765: (1.118295411s)
--- PASS: TestRunningBinaryUpgrade (235.95s)
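
The upgrade path being validated is simply: create the profile with an older minikube release, then run start on the same profile with the newer binary; a sketch (the old-binary path is wherever the downloaded release was saved):

$ /tmp/minikube-v1.26.0.2056465327 start -p running-upgrade-319765 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 start -p running-upgrade-319765 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
$ out/minikube-linux-amd64 delete -p running-upgrade-319765   # clean up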

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-288448 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-288448 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.113027ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-288448] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
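
A sketch of the guard being tested: --no-kubernetes and --kubernetes-version are mutually exclusive, and the suggested remedy is to clear any version pinned in the global config:

$ out/minikube-linux-amd64 start -p NoKubernetes-288448 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio   # exits 14 (MK_USAGE)
$ minikube config unset kubernetes-version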

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (95.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-288448 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-288448 --driver=kvm2  --container-runtime=crio: (1m35.034450955s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-288448 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (95.29s)

                                                
                                    
TestNetworkPlugins/group/false (2.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-754609 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-754609 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (98.354842ms)

                                                
                                                
-- stdout --
	* [false-754609] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:48:50.788967   51810 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:48:50.789068   51810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:48:50.789076   51810 out.go:358] Setting ErrFile to fd 2...
	I0818 19:48:50.789081   51810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:48:50.789260   51810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-7747/.minikube/bin
	I0818 19:48:50.789789   51810 out.go:352] Setting JSON to false
	I0818 19:48:50.790672   51810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5475,"bootTime":1724005056,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 19:48:50.790725   51810 start.go:139] virtualization: kvm guest
	I0818 19:48:50.792999   51810 out.go:177] * [false-754609] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 19:48:50.794329   51810 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:48:50.794335   51810 notify.go:220] Checking for updates...
	I0818 19:48:50.796842   51810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:48:50.798307   51810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-7747/kubeconfig
	I0818 19:48:50.799560   51810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-7747/.minikube
	I0818 19:48:50.800789   51810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 19:48:50.802018   51810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:48:50.803763   51810 config.go:182] Loaded profile config "NoKubernetes-288448": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:48:50.803884   51810 config.go:182] Loaded profile config "offline-crio-277219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0818 19:48:50.804000   51810 config.go:182] Loaded profile config "running-upgrade-319765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0818 19:48:50.804096   51810 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:48:50.841234   51810 out.go:177] * Using the kvm2 driver based on user configuration
	I0818 19:48:50.842525   51810 start.go:297] selected driver: kvm2
	I0818 19:48:50.842545   51810 start.go:901] validating driver "kvm2" against <nil>
	I0818 19:48:50.842559   51810 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:48:50.844641   51810 out.go:201] 
	W0818 19:48:50.845811   51810 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0818 19:48:50.846967   51810 out.go:201] 

                                                
                                                
** /stderr **
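
The failure is the expected one: with --container-runtime=crio minikube requires a CNI, so --cni=false is rejected before any VM is created (a sketch of the rejected invocation):

$ out/minikube-linux-amd64 start -p false-754609 --memory=2048 --cni=false --driver=kvm2 --container-runtime=crio   # exits 14: the "crio" container runtime requires CNI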
net_test.go:88: 
----------------------- debugLogs start: false-754609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-754609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 18 Aug 2024 19:48:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.26:8443
  name: offline-crio-277219
contexts:
- context:
    cluster: offline-crio-277219
    extensions:
    - extension:
        last-update: Sun, 18 Aug 2024 19:48:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: offline-crio-277219
  name: offline-crio-277219
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-277219
  user:
    client-certificate: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/offline-crio-277219/client.crt
    client-key: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/offline-crio-277219/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-754609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-754609"

                                                
                                                
----------------------- debugLogs end: false-754609 [took: 2.740810398s] --------------------------------
helpers_test.go:175: Cleaning up "false-754609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-754609
--- PASS: TestNetworkPlugins/group/false (2.98s)
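Note on the debugLogs block above: the collected kubeconfig has current-context set to "" and only contains the offline-crio-277219 profile, which is why every query scoped to false-754609 reports "Profile not found" or "context was not found" (the false CNI option appears to be rejected up front, so no cluster or context is ever created for it). As a purely illustrative sketch, a kubeconfig like the one dumped above would normally be inspected with standard kubectl commands, reusing the context name from the dump:

kubectl config get-contexts                      # list the contexts present in the kubeconfig
kubectl config use-context offline-crio-277219   # select one of them as current-context
kubectl --context offline-crio-277219 get nodes  # or pass --context per command, as the tests do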

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (67.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-288448 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0818 19:49:26.646913   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-288448 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m6.452193661s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-288448 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-288448 status -o json: exit status 2 (222.233806ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-288448","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-288448
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-288448: (1.030569061s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (67.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-288448 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-288448 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.619793367s)
--- PASS: TestNoKubernetes/serial/Start (27.62s)

                                                
                                    
x
+
TestPause/serial/Start (90.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-147100 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-147100 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m30.612547625s)
--- PASS: TestPause/serial/Start (90.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-288448 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-288448 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.541495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
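For context on the NoKubernetes checks above: with --no-kubernetes the VM comes up without a running kubelet, so status reports Kubelet and APIServer as Stopped (hence the exit status 2), and systemctl is-active inside the guest exits non-zero (3 indicates the unit is not active), which is exactly what the test asserts via the exit code. Re-running the same checks by hand would look roughly like this (profile name taken from the log; illustrative only, not part of the test run):

out/minikube-linux-amd64 start -p NoKubernetes-288448 --no-kubernetes --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p NoKubernetes-288448 status -o json                          # exit code 2: Kubelet/APIServer reported "Stopped"
out/minikube-linux-amd64 ssh -p NoKubernetes-288448 "sudo systemctl is-active kubelet"  # non-zero exit because the unit is not active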

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (24.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (12.151476886s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (12.761622295s)
--- PASS: TestNoKubernetes/serial/ProfileList (24.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-288448
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-288448: (2.156320685s)
--- PASS: TestNoKubernetes/serial/Stop (2.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-288448 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-288448 --driver=kvm2  --container-runtime=crio: (22.070210418s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-288448 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-288448 "sudo systemctl is-active --quiet service kubelet": exit status 1 (188.039412ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (101.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3232741285 start -p stopped-upgrade-729585 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3232741285 start -p stopped-upgrade-729585 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (56.907851297s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3232741285 -p stopped-upgrade-729585 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3232741285 -p stopped-upgrade-729585 stop: (2.129030102s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-729585 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-729585 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.792997165s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (99.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m39.086846261s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (92.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m32.699918905s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (92.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-729585
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (106.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0818 19:54:26.647067   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m46.104489121s)
--- PASS: TestNetworkPlugins/group/calico/Start (106.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-754609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-754609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lnnm4" [cb6c645f-210c-4ca3-8b97-10ed1b4b8f6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lnnm4" [cb6c645f-210c-4ca3-8b97-10ed1b4b8f6a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003778139s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mpdsv" [cce361e5-0661-46e7-93be-2efdf0c25af4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004012648s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-754609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
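The DNS/Localhost/HairPin triplet above is repeated for every CNI that follows; Localhost and HairPin run the same netcat probe and differ only in the target. Roughly, mirroring the commands already in the log rather than adding new test code:

# -z: probe the port without sending data, -w 5: 5 second timeout, -i 5: 5 second interval between probes
kubectl --context auto-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # pod reaches port 8080 on itself via localhost
kubectl --context auto-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # pod reaches itself through its own Service (hairpin path)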

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-754609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-754609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dhf7g" [f4100ba7-e46f-417c-a8b5-fefae8153b98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dhf7g" [f4100ba7-e46f-417c-a8b5-fefae8153b98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004014265s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-754609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (71.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m11.243436254s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (101.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m41.602338354s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-h5x52" [e65286c8-4f27-4ae8-b50a-6818f430c999] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004451153s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-754609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-754609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-22jbh" [73f67255-17da-48e3-aa7f-fbd88099306e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-22jbh" [73f67255-17da-48e3-aa7f-fbd88099306e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006059391s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-754609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (84.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m24.065109871s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-754609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-754609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5tc72" [e3aab277-8ef6-448b-a177-1cc05d8fe339] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5tc72" [e3aab277-8ef6-448b-a177-1cc05d8fe339] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004634009s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-754609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-754609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.559958132s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.56s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-754609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-754609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hw8lt" [073d26e8-b7b1-4ded-9b0d-25b67e57d5f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hw8lt" [073d26e8-b7b1-4ded-9b0d-25b67e57d5f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00486075s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-754609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dwtnh" [0ed1005b-b456-490c-aa65-4be851e1dcdd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00512847s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-754609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-754609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f85c4" [565192ce-36bf-4622-bec7-4eb64f05c6da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f85c4" [565192ce-36bf-4622-bec7-4eb64f05c6da] Running
E0818 19:58:07.089691   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.005607897s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-754609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-754609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-754609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9w5v2" [ded20f35-eb81-41c2-bb2f-0d27d68d2d75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9w5v2" [ded20f35-eb81-41c2-bb2f-0d27d68d2d75] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004437546s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (109.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-944426 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-944426 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m49.718501969s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (109.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-754609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-754609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E0818 20:28:16.211655   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (112.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-291295 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-291295 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m52.170008715s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (112.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (86.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-868662 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0818 19:59:26.646368   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:07.765865   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:07.772265   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:07.783621   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:07.805072   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:07.846584   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:07.928097   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:08.090326   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:08.412091   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:09.054402   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:10.336003   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-868662 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m26.070891499s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (86.07s)
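The E0818 ... cert_rotation.go:171 lines interleaved with the tests above and below appear to be noise from client-go's certificate-rotation watcher, which still points at client certificates of profiles that have already been deleted (functional-159278, auto-754609, kindnet-754609, and so on); they are emitted by the test binary itself and do not feed into any pass/fail result. When scanning a raw log like this one they can simply be filtered out, for example (log file name is hypothetical):

grep -v 'cert_rotation.go' minikube_test.log   # drop the UnhandledError noise before reading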

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-868662 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-868662 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.253408765s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-868662 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-868662 --alsologtostderr -v=3: (11.337849025s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-944426 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8e252dc5-cc67-484b-9b0e-9ffffbaebdf4] Pending
E0818 20:00:12.897833   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [8e252dc5-cc67-484b-9b0e-9ffffbaebdf4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0818 20:00:13.771534   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:13.777875   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:13.789259   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:13.810751   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:13.852205   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:13.933653   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:14.095159   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:14.416652   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:15.058457   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:16.340637   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [8e252dc5-cc67-484b-9b0e-9ffffbaebdf4] Running
E0818 20:00:18.019306   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:18.902806   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003914326s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-944426 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-291295 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7e8d67dd-2341-44a9-8d8b-547e401ce22c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7e8d67dd-2341-44a9-8d8b-547e401ce22c] Running
E0818 20:00:24.024739   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:00:28.261217   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0037619s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-291295 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-944426 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-944426 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001990984s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-944426 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)
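
Note: to confirm that the --images/--registries overrides were applied to the addon, the deployment's container image can be read back directly. A minimal sketch using standard kubectl (the expected result is hedged; the test itself only runs describe):

	kubectl --context no-preload-944426 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The printed image reference should come from the fake.domain registry configured above rather than the default metrics-server image.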

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-868662 -n newest-cni-868662
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-868662 -n newest-cni-868662: exit status 7 (70.67414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-868662 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (38.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-868662 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-868662 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (38.11874365s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-868662 -n newest-cni-868662
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.43s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-291295 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-291295 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-868662 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-868662 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-868662 --alsologtostderr -v=1: (1.796608008s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-868662 -n newest-cni-868662
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-868662 -n newest-cni-868662: exit status 2 (356.94152ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-868662 -n newest-cni-868662
E0818 20:01:04.204106   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-868662 -n newest-cni-868662: exit status 2 (268.035635ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-868662 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-868662 --alsologtostderr -v=1: (1.064192155s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-868662 -n newest-cni-868662
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-868662 -n newest-cni-868662
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-852598 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0818 20:01:14.446333   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:29.704998   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:34.927947   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:35.710288   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:44.019044   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:46.286034   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:46.292377   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:46.303707   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:46.325440   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:46.366833   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:46.448263   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:46.610212   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:46.931904   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:47.573613   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:48.855505   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:51.417512   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:01:56.539178   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-852598 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (53.439379855s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-852598 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4eb1ca59-74a3-4f4d-a1fd-62f0c5620af4] Pending
helpers_test.go:344: "busybox" [4eb1ca59-74a3-4f4d-a1fd-62f0c5620af4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4eb1ca59-74a3-4f4d-a1fd-62f0c5620af4] Running
E0818 20:02:06.780557   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.006882047s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-852598 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-852598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-852598 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (649.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-944426 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0818 20:02:56.832372   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:02:57.632509   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-944426 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m49.702754409s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944426 -n no-preload-944426
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (649.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (603.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-291295 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0818 20:03:08.224326   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:10.620985   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:12.196306   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:16.211962   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:16.218353   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:16.229715   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:16.251095   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:16.292689   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:16.374166   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:16.535750   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:16.857672   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:17.499724   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:18.781511   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:21.343445   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:26.465444   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:32.677771   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:36.706720   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:37.811006   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:51.582338   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:03:57.188771   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-291295 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m2.836396222s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291295 -n embed-certs-291295
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (603.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-247539 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-247539 --alsologtostderr -v=3: (4.288520211s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247539 -n old-k8s-version-247539: exit status 7 (63.63121ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-247539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (547s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-852598 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0818 20:05:07.765639   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:05:13.503992   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:05:13.771814   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:05:35.468661   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:05:35.561184   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:05:41.474637   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:05:49.715958   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:05:53.949589   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:06:00.071537   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:06:21.653059   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:06:44.019307   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:06:46.285519   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:07:13.988093   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:07:29.643157   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:07:51.701373   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:07:57.345576   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:08:16.212510   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:08:19.402825   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:08:43.913362   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/bridge-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:09:26.646311   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/functional-159278/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:10:07.765891   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/auto-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:10:13.772027   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/kindnet-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:10:53.949030   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/calico-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:11:44.018733   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/addons-483094/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:11:46.285835   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/custom-flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:12:29.643142   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/enable-default-cni-754609/client.crt: no such file or directory" logger="UnhandledError"
E0818 20:12:51.701036   14934 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/flannel-754609/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-852598 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m6.740667508s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-852598 -n default-k8s-diff-port-852598
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (547.00s)
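
Note: a quick way to confirm that the non-default API server port (--apiserver-port=8444) survived the restart is to ask the cluster for its control-plane URL. A minimal sketch using standard kubectl:

	kubectl --context default-k8s-diff-port-852598 cluster-info

The control plane URL printed there is expected to end in :8444.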

                                                
                                    

Test skip (37/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
120 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
122 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 2.61
262 TestNetworkPlugins/group/cilium 3.77
276 TestStartStop/group/disable-driver-mounts 0.13
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-754609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-754609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 18 Aug 2024 19:48:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.26:8443
  name: offline-crio-277219
contexts:
- context:
    cluster: offline-crio-277219
    extensions:
    - extension:
        last-update: Sun, 18 Aug 2024 19:48:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: offline-crio-277219
  name: offline-crio-277219
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-277219
  user:
    client-certificate: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/offline-crio-277219/client.crt
    client-key: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/offline-crio-277219/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-754609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-754609"

                                                
                                                
----------------------- debugLogs end: kubenet-754609 [took: 2.475219358s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-754609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-754609
--- SKIP: TestNetworkPlugins/group/kubenet (2.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-754609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-754609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19423-7747/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 18 Aug 2024 19:48:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.39.26:8443
  name: offline-crio-277219
contexts:
- context:
    cluster: offline-crio-277219
    extensions:
    - extension:
        last-update: Sun, 18 Aug 2024 19:48:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: offline-crio-277219
  name: offline-crio-277219
current-context: ""
kind: Config
preferences: {}
users:
- name: offline-crio-277219
  user:
    client-certificate: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/offline-crio-277219/client.crt
    client-key: /home/jenkins/minikube-integration/19423-7747/.minikube/profiles/offline-crio-277219/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-754609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-754609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-754609"

                                                
                                                
----------------------- debugLogs end: cilium-754609 [took: 3.612994919s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-754609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-754609
--- SKIP: TestNetworkPlugins/group/cilium (3.77s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-675510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-675510
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    